An Engineering Update on the Dragonflight Launch

With Dragonflight’s recent launch behind us, we want to take some time to talk with you about what occurred over the past few days from an engineering viewpoint. We hope this will provide a bit more insight into what it takes to make a global launch like this happen, what can go right, what hiccups can occur along the way, and how we manage them.

Internally, we call events like last Monday’s “content launch,” because launching an expansion is a process, not a single day. Far from being a static game running the same way it did eighteen years ago—or even two years ago—World of Warcraft is constantly changing and growing, and our deployment processes change along with it.

Expansions now consist of several smaller launches: the code first goes live running the old content, then pre-launch events and new systems turn on, and finally, on content launch day, the new areas, quests, and dungeons open up. Each stage changes different things so we can find and fix problems. But in any large, complex system, the unexpected can still occur.

One change with this expansion was that the content launch was triggered using a timed event: multiple changes to the game can be scheduled to all happen at a particular time. Making these changes manually carries the risk of human error, or of an internal or external tool outage; using a timed event helps mitigate those risks.
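
To make that concrete, here is a minimal sketch of the timed-event idea. The `TimedEvent` type and `tick()` function below are purely illustrative and are not our actual server code; the shape is simply “queue up a batch of changes, then apply them all together the moment the clock passes the configured time.”

```cpp
// Illustrative sketch of a timed-event trigger (hypothetical types, not the
// real WoW service code).
#include <chrono>
#include <functional>
#include <iostream>
#include <vector>

struct TimedEvent {
    std::chrono::system_clock::time_point fires_at;  // e.g. 3:00 p.m. PST on launch day
    std::vector<std::function<void()>> changes;      // everything that must flip together
    bool fired = false;
};

// Called on a regular cadence: applies every queued change the moment the
// clock passes the configured time, with no human flipping switches.
void tick(TimedEvent& event, std::chrono::system_clock::time_point now) {
    if (event.fired || now < event.fires_at) return;
    for (auto& change : event.changes) change();
    event.fired = true;
}

int main() {
    TimedEvent launch;
    launch.fires_at = std::chrono::system_clock::now();  // stand-in for the real launch time
    launch.changes.push_back([] { std::cout << "unlock Dragon Isles maps\n"; });
    launch.changes.push_back([] { std::cout << "enable launch quests\n"; });
    launch.changes.push_back([] { std::cout << "send out the boats\n"; });

    tick(launch, std::chrono::system_clock::now());  // all three changes fire together
}
```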

Another change in Dragonflight: greatly enhanced support for encrypting game data records. Encrypted records allow us to ship the client with the data the game needs to show cutscenes, play voice lines, or unlock quests, while keeping that data from being mined before players get to experience it in-game. We know the community loves WoW, and when you’re hungry to experience any morsel, it’s hard not to spoil yourself before the main course. Encrypted records let us take critical story beats and hide them until the right moment to reveal them.
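
Conceptually, it works something like the sketch below. The `GameRecord` and `KeyRing` types are hypothetical, and the XOR only stands in for a real cipher; the point is that the data shipped with the client stays unreadable until a key is delivered at the right moment.

```cpp
// Hypothetical illustration of encrypted records: ciphertext ships with the
// client, and nothing is readable until the matching key arrives.
#include <cstdint>
#include <iostream>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

struct GameRecord {
    uint32_t id;
    std::vector<uint8_t> payload;   // ciphertext shipped in the client data
    uint64_t key_id;                // which key unlocks it (0 = not encrypted)
};

struct KeyRing {
    std::unordered_map<uint64_t, std::vector<uint8_t>> keys;

    std::optional<std::string> decrypt(const GameRecord& rec) const {
        if (rec.key_id == 0)
            return std::string(rec.payload.begin(), rec.payload.end());
        auto it = keys.find(rec.key_id);
        if (it == keys.end())
            return std::nullopt;    // still hidden: no key delivered yet
        // A real system would use an actual cipher; XOR is only a stand-in.
        std::string out;
        for (size_t i = 0; i < rec.payload.size(); ++i)
            out.push_back(static_cast<char>(rec.payload[i] ^ it->second[i % it->second.size()]));
        return out;
    }
};

int main() {
    std::vector<uint8_t> key = {0x5a, 0xc3, 0x7f};
    std::string plain = "A major character returns";

    GameRecord rec{1001, {}, /*key_id=*/42};
    for (size_t i = 0; i < plain.size(); ++i)
        rec.payload.push_back(static_cast<uint8_t>(plain[i]) ^ key[i % key.size()]);

    KeyRing ring;
    std::cout << "before key: " << ring.decrypt(rec).value_or("<still hidden>") << "\n";
    ring.keys[42] = key;
    std::cout << "after key:  " << ring.decrypt(rec).value_or("<still hidden>") << "\n";
}
```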

We now know that the lag and instability we saw last week were caused by the way these two systems interacted. Together, they forced the simulation servers (the servers that move your characters around the world and perform their spells and abilities) to recalculate which records should be hidden more than one hundred times a second, per simulation. Because a great deal of CPU power was spent on these calculations, the simulations became bogged down, and requests from other services to those simulation servers backed up. Players experience this as lag and as error messages like “World Server Down”.

As we discovered, records that stayed encrypted until a timed event unlocked them exposed a small logic error in the code: a misplaced line signaled to the server that it needed to recalculate which records to hide, even though nothing had changed.
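
In spirit, the mistake looked something like the sketch below. This is a hypothetical illustration, not our actual source: the set of hidden records should only be recomputed when a “dirty” flag says something changed, but one misplaced call sets that flag on every update.

```cpp
// Hypothetical sketch of the bug pattern: a misplaced mark_dirty() call forces
// an expensive recalculation on every simulation tick.
#include <cstdio>

struct HiddenRecordCache {
    bool dirty = true;          // recompute only when this is set
    int recompute_count = 0;

    void mark_dirty() { dirty = true; }

    void update() {
        // BUG: this line belongs inside a "a timed event fired" or
        // "a key arrived" branch. Placed here, it forces a full
        // recalculation every single pass, even when nothing changed.
        mark_dirty();

        if (dirty) {
            ++recompute_count;  // stands in for an expensive scan of all records
            dirty = false;
        }
    }
};

int main() {
    HiddenRecordCache cache;
    // The simulation updates many times per second; with the bug, every pass
    // pays the full recalculation cost, which is where the CPU time went.
    for (int tick = 0; tick < 100; ++tick) cache.update();
    std::printf("recomputed %d times in 100 ticks (expected: 1)\n",
                cache.recompute_count);
}
```

In a case like this, the correction amounts to moving or guarding that one call so it only runs when something has actually changed.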

Here’s some insight into how that investigation unfolded. First, the clock strikes 3:00 p.m. PST. We know from testing that the Horde boat arrives first and the Alliance boat arrives next. Many of us are logged in to the game on characters sitting on the docks in both locations in one window, watching logs or graphs or dashboards in other windows. We’re also on a conference call with colleagues from support teams all over Blizzard.

Before launch, we’ve created contingency plans for situations we’re worried about as a result of our testing. For example, for this launch, our designers created portals that players could use to get to the Dragon Isles in case the boats failed to work.

At 3:02 p.m. the Horde boat arrives on schedule. Hooray! Players pile on, including some Blizzard employees. Other employees wait (they want to be test cases in case we must turn on portals). The players on the boats sail off, and while some do arrive on the Dragon Isles, many more are disconnected or get stuck.

Immediately we start searching logs and dashboards. There are some players on the Dragon Isles map, but not many. Colleagues having issues report their character names and realms as specific examples. Others start reporting spikes in CPU load and on the NFS (Network File System) our servers use. Still others are watching in-game, reporting what they see.

Now that we’ve seen the Horde boats, we start watching for the Alliance boats to arrive. Most of them don’t, and most of the Horde boats don’t return either.

A picture emerges: the boats are stuck, and Dragon Isles servers are taking much longer to spin up than expected. Here’s where we really dig in and start to problem-solve.

Boats have been a problem in the past, so we turn on portals while we continue investigating. Our NFS is clearly overloaded. A large network queue builds up on the service responsible for coordinating the simulation servers, making it think simulations aren’t starting, so it launches more and starts to overwhelm our hardware. Soon we discover that adding the portals has made the overload worse, because players can click the portals as many times as they want, so we turn the portals off.
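
For the curious, that runaway launching is a classic feedback loop, roughly like the heavily simplified, hypothetical model below: when “hasn’t reported in yet” looks the same as “never started,” a stalled queue makes the coordinator keep launching replacements it doesn’t need.

```cpp
// Hypothetical model of the feedback loop (not the real coordinator): stuck
// acknowledgements make healthy simulations look like they never launched.
#include <cstdio>
#include <queue>

int main() {
    std::queue<int> stuck_acks;   // "I'm running" messages sitting in a backed-up network queue
    const int needed = 10;
    int launched = 0;

    for (int tick = 1; tick <= 5; ++tick) {
        // Normally acknowledgements drain here; with the queue stalled, none do,
        // so every simulation ever launched still looks unconfirmed.
        int confirmed = launched - static_cast<int>(stuck_acks.size());

        // The coordinator launches replacements for everything it can't confirm.
        int missing = needed - confirmed;
        for (int i = 0; i < missing; ++i) {
            ++launched;
            stuck_acks.push(launched);   // the new ack joins the same stalled queue
        }
        std::printf("tick %d: %d simulations launched for %d actually needed\n",
                    tick, launched, needed);
    }
}
```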

As the problems persist, we work on tackling the increased load to get as many players in to play as possible, but the service is not behaving the way it did in pre-launch tests. We keep troubleshooting and, based on those tests, rule out things we know aren’t the cause.

Despite the late hour, many continue to work while others head home to rest so they can return early the next day with a fresh start and relieve those working overnight.

By Tuesday morning, we have a better understanding of things. We know we’re sending more messages to clients about quests than usual, although later discoveries will reveal this isn’t causing problems. A new file storage API we’re using is hitting our file storage harder than usual. Some new code added for quest givers to beckon players seems slower than it should be. The service is taking a very long time to send clients all the data changes made in hotfixes. And reports are coming in that players who have made it to the Dragon Isles have started experiencing extreme lag.

Mid-Tuesday morning, a coincidence happens: digging deep into the new beckon code, we find hooks for the new encryption system. We start looking at the question from the other side: could the encryption system being slow explain these and the other issues we’re seeing? As it turns out, yes, it can. The encryption system being slow explains the hotfix problem, the file storage problem, and the lag players are experiencing. With the source identified, the author of the relevant part of the system was able to pinpoint the error and make the needed correction.

Pushing a fix to code used across so many services isn’t like flipping a switch; new binaries must be built, rolled out, and turned on. We must slowly move players from the old simulations to the new ones for the correction to be picked up. In fact, at one point we try to move players too quickly and cause another part of the service to suffer. Some of the affected binaries cannot be corrected without a service restart, which we delay until the fewest players are online so we disrupt as few of them as possible. By Wednesday, the fix was fully deployed and service stability had dramatically improved.
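
Moving players gradually is essentially rate limiting. Here’s a tiny, hypothetical sketch of the idea (the real handoff involves far more than flipping a flag): cap how many players migrate per interval so no downstream service gets hit with everyone at once.

```cpp
// Hypothetical sketch of rate-limited migration from old to new simulations.
#include <cstdio>
#include <vector>

struct Player {
    int id = 0;
    bool on_new_simulation = false;
};

// Moves at most `per_batch` players per call; a timer invokes this repeatedly,
// so the receiving side never sees more than one batch at a time.
int migrateBatch(std::vector<Player>& players, int per_batch) {
    int moved = 0;
    for (auto& p : players) {
        if (moved >= per_batch) break;
        if (!p.on_new_simulation) {
            p.on_new_simulation = true;   // stands in for the real simulation handoff
            ++moved;
        }
    }
    return moved;
}

int main() {
    std::vector<Player> players(1000);
    for (int i = 0; i < 1000; ++i) players[i].id = i;

    int batches = 0;
    while (migrateBatch(players, 50) > 0) ++batches;   // 50 players per interval
    std::printf("migrated everyone in %d batches\n", batches);
}
```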

While it took some effort to identify the issue and get it fixed, our team was relentless in investigating it and getting a correction out as quickly as possible. Good software engineering isn’t about never making mistakes; it’s about minimizing the chances of making them, finding them quickly when they happen, having the tools to get fixes in right away…

…and having an amazing team to come together to make it all happen.


—The World of Warcraft Engineering Team


140 Likes

This is so cool! I love to see engineering explanations like this, and would love for you to dive even deeper into the technicalities if at all possible. As a fellow software engineer, these are so interesting.

30 Likes

I love reading these types of posts. They’re similar to the dev water coolers, I hope you resume making more of these posts!

12 Likes

Thank you for this post.

I know this launch experience wasn’t as smooth as Legion, BFA, or Shadowlands, but the problems got solved relatively quickly. Thank you for sharing this with us, and thank you for all the hard work you guys do!

11 Likes

Thank you for taking the time to put this post together, it’s interesting to see a little of what goes on behind the scenes :+1:

7 Likes

Good reading, but this made me chuckle :rofl:
It’s like some dramatic quest from 8.2 :smile:

12 Likes

I really love these kinds of engineering posts! As a systems engineer myself, I know how things can go wrong unexpectedly and very quickly, and every minute counts when fixing things in a production environment.

3 Likes

I thought this thread would be about the Engineering profession, kek

But good to know you guys are checking this stuff up. Yesterday I already noticed better response from the game than at the beginning of the week

1 Like

As a software engineer myself, THANKS FOR THIS POST. Seriously. The launch was really frustrating, especially for those of us who took leave from work the next day.

But this post made me understand you better, and what the engineering team went through that day and night. It was really instructive, not only for knowing why the game was in that state, but also as a lesson for myself on releasing software.

Thanks!

3 Likes

I commend the transparency here. But as a customer, I would like to know why this wasn’t stress tested to find this out before it blew up production.

I understand this is complicated. I understand you can have strange interactions between code, and that stress testing cannot catch everything. But this does seem to boil down to “we changed a bunch of stuff, and it wasn’t properly stress tested”.

People can make mistakes, and coding errors are always going to be made. But why did the servers not get hammered during internal testing to replicate this? Because that would seem to suggest a big failure in the QA process.

The detail and openness is very much appreciated and from a different part of the tech world I really understand the pain and pressure you were all under.

2 Likes

Thank you for sharing <3

2 Likes

Oh you’re talking about real engineering, not the in game one.

1 Like

Really love posts like this. Please continue to do these technical style posts in the future where appropriate!

1 Like

Most probably, the sheer scale cannot be replicated, and signs of issues like this cannot be pinpointed in the smaller-scale testing of alpha and beta. Creating replica or AI players to simulate a crossing like this requires infrastructure that is too large to be viable purely for testing purposes.

Made my day :laughing:

As others have said… that’s a nice insight into what happens behind the scenes, and as a software engineer myself it is a good reminder of where to expect hidden bottlenecks (in this case, the encryption system).

As I understand it, the problem here was some kind of resonance-cascade failure that didn’t occur on test servers because you simply cannot replicate an event at such a large scale.
There were thousands of people trying to get into DF at the same time, so the encryption routine that had seemed to work fine was putting too much load on the file system, which got slower and slower with each new pending request.
Then some kind of load balancer detected that there was lots of lag and requested more resources.
More players were able to connect and got thrown at the file system bottleneck, which got slower again.
Then (and that’s a human error) the GMs invoked their contingency plan for “non-working ships” and threw the people who weren’t yet stressing the file system at it as well, and instead of solving the problem they made it even worse.

I don’t see a realistic scenario on how that effect could have been properly stress tested.

2 Likes

The update does not work. I can’t play the new World of Warcraft like this, and I’d like to play again.
I was engaged in PvP. I am not paying for a game that does this.

Thank you guys for the effort on this one. We all appreciate the blue post as well. Keep up the good work!

1 Like