
Wild speculations for fun and science!


CodeGlitch0


 

But my point is, the IP addresses behind the DNS aren't going to be changing much while the server is running. Once an attacker knows where your server is, all he has to do is concentrate enough traffic somewhere in the logical vicinity of your server, and he'll bring that whole region of the network down.

 

So within your system I could:

1. Set up packet capture locally.

2. Play the game normally, like any other legitimate customer would.

3. Obtain the NQ server(s) IP(s) by reading the traffic leaving my machine.

4. Use my C&C to issue an order to my botnet to nuke NQ servers. NQ servers are now offline.

5. ????

6. Profit.

Lesson? IP addresses aren't secrets, and you can't treat them as such in this kind of model.
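The discovery steps above can be sketched in a few lines. This is a toy illustration, not a real capture: the line format mimics typical `tcpdump -n` output, and all addresses are documentation-range placeholders.

```python
import re

def extract_dest_ips(capture_lines):
    """Pull destination IPv4 addresses out of tcpdump-style lines.

    Expects lines like:
      '12:00:01.000 IP 192.168.1.5.54321 > 203.0.113.10.443: Flags [S]'
    where the dotted quad after '>' is the destination address.
    """
    dests = set()
    pat = re.compile(r'> (\d+\.\d+\.\d+\.\d+)\.\d+:')
    for line in capture_lines:
        m = pat.search(line)
        if m:
            dests.add(m.group(1))
    return dests

# A legitimate client only has to play normally; the server IPs
# show up in its own outbound traffic.
sample = [
    "12:00:01.000 IP 192.168.1.5.54321 > 203.0.113.10.443: Flags [S]",
    "12:00:01.050 IP 192.168.1.5.54322 > 203.0.113.11.5000: Flags [S]",
]
print(extract_dest_ips(sample))  # both server IPs recovered
```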

 

Point being, there is no easy defense against DDoS. All the mitigation systems I know of work primarily by distributing the load of the attack, but you can't do that if you have just one or two servers. Only players like Google can do that sort of thing.

 

Given your example:

 

At least two IPs are known to you, the hacker: the initial authentication server and the current "sector server" (the 3D sector you're in).

If you attacked my "authentication server", I could drop all traffic to that server and the game clients would continue to connect to another authentication server IP.  Given the hosting provider, the IPs wouldn't have to sit on the same subnet, so they would use completely different routes and network resources.  So the DDoS packets wouldn't affect the other "authentication servers" on the list.  The packets could also be dropped WAY upstream.
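A rough sketch of that client-side failover, with `probe` standing in for a real connection attempt (e.g. a TCP connect with a short timeout); the endpoint list and addresses here are hypothetical:

```python
def first_reachable(endpoints, probe):
    """Return the first endpoint that answers, or None.

    `probe` is a stand-in for a real connection attempt; it returns
    True when the endpoint responds. If one authentication IP is
    blackholed during an attack, the client simply moves on to the
    next address on its list.
    """
    for ep in endpoints:
        if probe(ep):
            return ep
    return None

# Simulate the first auth IP being dropped upstream during a DDoS:
down = {"198.51.100.1"}
endpoints = ["198.51.100.1", "198.51.100.77", "203.0.113.200"]
print(first_reachable(endpoints, lambda ep: ep not in down))
# → 198.51.100.77
```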

 

The "sector server" could be impacted, but it would only affect a small number of users, and I addressed those concerns in my original post.  Including determining WHICH game client was the actual hacker so they could be banned.


The "sector server" could be impacted, but it would only affect a small number of users, and I addressed those concerns in my original post.  Including determining WHICH game client was the actual hacker so they could be banned.

 

All sectors hosted on that same server/datacenter would be affected, since my attack would be saturating their uplink. There are many bandwidth amplification attacks that make it fairly easy to block traffic in an entire region local to that endpoint (attacks of tens of Gb/s are pretty common, several hundred Gb/s not unheard of -- that will drown out an entire datacenter's traffic).
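The back-of-the-envelope arithmetic behind those figures, where the ~50x amplification factor is a rough, commonly cited ballpark for DNS reflection rather than a measured value:

```python
def attack_bandwidth_gbps(bots, query_bps_per_bot, amplification):
    """Aggregate reflected traffic hitting the victim, in Gb/s.

    Each bot sends small spoofed queries; the reflectors answer the
    victim with responses `amplification` times larger.
    """
    return bots * query_bps_per_bot * amplification / 1e9

# e.g. 10,000 bots each sending 100 kb/s of queries, ~50x amplification:
print(attack_bandwidth_gbps(10_000, 100_000, 50))  # → 50.0 (Gb/s)
```

Even a modest botnet lands in the tens of Gb/s, which is more than a single uplink can absorb.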

 

[EDIT]

And I think someone pointed out that they won't be hosting in several places across the globe, just in one or two places.

 

If they want to not be subject to smaller DDoS attacks, they'll have to hire someone to do mitigation for them.


Your attacks are on addresses (and their resources) that you are aware of.

 

I can procedurally create as many server names and IP addresses as I need. Even IPs hosted by completely different providers.

 

For every address you know about I can create 100.

 

Now I've taken your argument to an extreme, which isn't realistic. But your arguments haven't made a dent in my solution.

 

I've taken a central point of failure and a bottleneck and distributed it across a redundant solution.

 

That's something that's not possible with websites.

 

Yes. Given the size of the botnet, my solution can be taken down.

 

But it's decentralized AND redundant. And much more difficult than taking down the White House's website.


Your attacks are on addresses (and their resources) that you are aware of.

 

I can procedurally create as many server names and IP addresses as I need. Even IPs hosted by completely different providers.

 

For every address you know about I can create 100.

 

Now I've taken your argument to an extreme, which isn't realistic. But your arguments haven't made a dent in my solution.

 

I've taken a central point of failure and a bottleneck and distributed it across a redundant solution.

 

That's something that's not possible with websites.

 

Yes. Given the size of the botnet, my solution can be taken down.

 

But it's decentralized AND redundant. And much more difficult than taking down the White House's website.

 

Wait, how are you creating IPs? In the system I'm familiar with, IP addresses are assigned to you by IANA, via your service provider -- you can't just make them up, right? And even if you select a different IP from a block assigned to you, the attack will still reach the link to the router closest to your endpoint and saturate it, until you get your ISP to start dropping packets at the edges of their network destined for the address under attack -- at which point the attacker just needs to get the new IP.

 

You can pick another IP (let's assume we're on IPv6 and you have a large block to choose from), but when your service is back up, I connect with my legit user client again, get the new IP and start attacking that instead. And so on.
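That re-resolve-and-attack loop needs nothing more than a standard DNS lookup; in this sketch `localhost` stands in for the game's hostname, so it resolves anywhere:

```python
import socket

def current_ips(hostname):
    """Re-resolve the hostname and return its current set of IPs.

    Rotating the server onto a fresh IP only helps until the
    attacker's legitimate client re-resolves the name; each rotation
    buys time, not immunity.
    """
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

# 'localhost' stands in for the game's auth hostname:
print(current_ips("localhost"))
```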


I'm pretty much done here.

 

If you'd like to know how to create IPs in a cloud environment, google "Amazon elastic IPs."

 

It doesn't matter whether NQ uses Amazon, or some other party to host their servers.  Any decent hosting company has an equivalent solution.
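A toy model of what an elastic/floating IP buys you: the public address stays stable while the machine behind it changes. The dict here is a stand-in for the provider's API (on AWS this is roughly the `allocate_address`/`associate_address` calls); the addresses and instance names are hypothetical.

```python
def remap(pool, addr, new_target):
    """Point an allocated public IP at a different instance.

    The address survives; what it points at changes. `pool` is a toy
    stand-in for the cloud provider's mapping of public IP to the
    instance currently behind it.
    """
    if addr not in pool:
        raise KeyError(f"{addr} not allocated")
    pool[addr] = new_target
    return pool

pool = {"203.0.113.50": "auth-vm-1"}
remap(pool, "203.0.113.50", "auth-vm-2")  # swap the backend, keep the IP
print(pool)
```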


I'm pretty much done here.

 

If you'd like to know how to create IPs in a cloud environment, google "Amazon elastic IPs."

 

It doesn't matter whether NQ uses Amazon, or some other party to host their servers.  Any decent hosting company has an equivalent solution.

 

So you're presupposing they're behind some kind of DDoS mitigation service, the kind that I mentioned in the post that you were quoting?

 

I think we're in agreement, then.


So you're presupposing they're behind some kind of DDoS mitigation service, the kind that I mentioned in the post that you were quoting?

 

I think we're in agreement, then.

 

In your typical service provider patterns, yes, you're assigned personal IP addresses.  But in a cloud situation, providers like Amazon, Azure, etc. are assigned huge blocks of IP addresses which are used as needed by various clients.  If you're using static, very long-term IPs in a cloud provider, you're probably not making true use of the full power of the cloud.

 

The full power of the cloud comes from throwaway virtual machines and dynamic, fluid hyperscale capabilities.


In your typical service provider patterns, yes, you're assigned personal IP addresses.  But in a cloud situation, providers like Amazon, Azure, etc. are assigned huge blocks of IP addresses which are used as needed by various clients.  If you're using static, very long-term IPs in a cloud provider, you're probably not making true use of the full power of the cloud.

 

The full power of the cloud comes from throwaway virtual machines and dynamic, fluid hyperscale capabilities.

 

I guess I was starting with the assumption that NQ would build their backend entirely by themselves. Is it more typical for game servers (for the biggest games) to be hosted by someone else? I'm guessing MS and other similar actors in this market provide tools to their customers that allow them to manipulate some well-defined part of MS's cloud infrastructure?

 

I'm more used to the kinds of systems where you do all the hosting yourself. So you just buy a connection in a datacenter, build a machine and park it there, or buy servers that are already in place.

 

I thought you guys were talking about NQ building the cloud infrastructure themselves between the hardware they'd own, rather than building on an already existing layer of hardware and higher level protocols.


To buy and build a cloud yourself would be insane.  The kind of infrastructure that DU would require would have enormous up-front cost: certainly hundreds of thousands, probably millions.  NQ has stated (referenced earlier on this thread) that they will be using a cloud service, at launch at least. So my speculations assume they make the sane choice: a pre-existing cloud, with pre-made and configured core infrastructure as with something like Azure or Amazon cloud, rather than reinventing the wheel. If, by chance, they do decide to try to create their own cloud... I would be willing to bet that the initial launch would be a complete catastrophe.

 

The intention was an in-depth software/developer discussion, more so than an IT one.

 

To buy and build a cloud yourself would be insane.  The kind of infrastructure that DU would require would have enormous up-front cost: certainly hundreds of thousands, probably millions.  NQ has stated (referenced earlier on this thread) that they will be using a cloud service, at launch at least. So my speculations assume they make the sane choice: a pre-existing cloud, with pre-made and configured core infrastructure as with something like Azure or Amazon cloud, rather than reinventing the wheel. If, by chance, they do decide to try to create their own cloud... I would be willing to bet that the initial launch would be a complete catastrophe.
 
The intention was an in-depth software/developer discussion, more so than an IT one.

 

 

Yeah, that does seem reasonable.

 

I'm not sure how we veered this far off-topic, though I don't know what more there is to speculate about their backend... Specific implementations of various microservices and such, assuming your Azure model?


Yeah, that does seem reasonable.

 

I'm not sure how we veered this far off-topic, though I don't know what more there is to speculate about their backend... Specific implementations of various microservices and such, assuming your Azure model?

 

I'm currently working on a diagram of the model.  I'll post in the next day or so hopefully.  Then we can rip that apart until we have something that feels like it might be a viable back-end for DU.  Then I will probably tinker with an implementation of it for performance checking and we can go from there.


I'm currently working on a diagram of the model.  I'll post in the next day or so hopefully.  Then we can rip that apart until we have something that feels like it might be a viable back-end for DU.  Then I will probably tinker with an implementation of it for performance checking and we can go from there.

 

Oh, you're going way more ham on this than I am, or than I anticipated you would. Godspeed I guess. o.o


Oh, you're going way more ham on this than I am, or than I anticipated you would. Godspeed I guess. o.o

 

Yes, well, I'm something of a nerd.  Talking about it got me wondering what the actual latency on something like this might be.  So, now I want to put together a quick test to see how it actually might perform...

 

EDIT: I'm also a hobbyist/aspiring game dev (at some point in my life, not that I'll ever get around to it).  But something like this may actually end up useful?
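A minimal version of that latency check, timing a tiny message over a loopback TCP socket. A loopback echo obviously ignores real network distance; it only bounds the per-message overhead of the client/server code itself:

```python
import socket
import threading
import time

def measure_rtt(n=50):
    """Mean round-trip time of a tiny message over local TCP, in ms."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # any free port
    srv.listen(1)

    def echo():
        # Echo everything back until the client disconnects.
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)

    threading.Thread(target=echo, daemon=True).start()

    with socket.create_connection(srv.getsockname()) as cli:
        start = time.perf_counter()
        for _ in range(n):
            cli.sendall(b"ping")
            cli.recv(64)
        elapsed = time.perf_counter() - start
    srv.close()
    return elapsed / n * 1000

print(f"{measure_rtt():.3f} ms per round trip")
```

The same harness pointed at a remote echo server would give a first, very rough idea of the per-sector message latency being discussed.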

