Everything posted by CodeGlitch0

  1. The game's balance is aimed at people with 8-20 hours a day to play, not at casual players. So, your best option at this point is to join an active organization that can provide you with resources for designing things. Going solo is not really an option for casual players. Unfortunately, this is by design.
  2. It is because NQ systems could not keep up with the level of industry people were creating after beta launch. So they increased minimum batch sizes to lessen the number of server calls required and save the servers. It is most likely a temporary measure, but we do not know for sure, or for how long.
  3. As @Helediron said, just push the button. If you look a little closer at the screenshot you posted, you will see it says "[saved email]" and "[saved password]" in the fields, indicating they don't need to be re-entered. The values are hidden for security, particularly for people who stream the game.
  4. My previous reply was a bit terse and could use additional explanation... 9800 N will cancel the effect of gravity if aligned in the direction opposing gravity. It will do nothing else. If you want to actually move anything (additional acceleration), you need multiples of that. How much depends on what you are trying to achieve: 4x (4g) is probably about the minimum, and 10g gives the most flexibility. (There is a quick worked calculation after the last post in this list.)
  5. At 1g, it takes about 9800 N to cancel out (not lift!) 1000 kg of mass.
  6. This should be moved to the NDA section, but the problem was that you didn't balance your ship properly. The extra weight (inertia) exacerbated the problem.
  7. "Directly" being the operative word, I guess. Voxel structures aren't based on edges and vertices like most 3D models. Edges and vertices for display are derived from voxel data. That doesn't mean it can't be done, it requires something to do the conversion. In fact, the NQ team did just that to import models into the game for demo videos, etc. But, by nature of the gameplay, that won't be available to the general player. We'll have to manipulate the voxels the old fashioned way.
  8. @Thainz: "CAD" functionality such as sketching, dimensioning, and measuring isn't directly compatible with a voxel engine. You could use a CAD system (such as onshape.com, which I use) to draw up a schematic, but you'll still have to translate that into voxels yourself; it wouldn't automatically convert in-game. Sure, if you were an NQ employee, you could use their tools to import 3D models into voxels, but that won't be available in game. However, you can certainly use an engineering schematic created in an external CAD program, with measurements, to be far more precise with your voxel placements and produce the best final version possible.
  9. Yes, well, I'm something of a nerd. Talking about it got me wondering what the actual latency on something like this might be. So, now I want to put together a quick test to see how it actually might perform... EDIT: I'm also a hobbyist/aspiring game dev (at some point in my life, not that I'll ever get around to it). But something like this may actually end up useful?
  10. I'm currently working on a diagram of the model. I'll post in the next day or so hopefully. Then we can rip that apart until we have something that feels like it might be a viable back-end for DU. Then I will probably tinker with an implementation of it for performance checking and we can go from there.
  11. To buy and build a cloud yourself would be insane. The kind of infrastructure that DU would require would have an enormous up-front cost: certainly hundreds of thousands of dollars, probably millions. NQ has stated (referenced earlier in this thread) that they will be using a cloud service, at launch at least. So my speculation assumes they make the sane choice: a pre-existing cloud with pre-made, configured core infrastructure, such as Azure or Amazon's cloud, rather than reinventing the wheel. If, by chance, they do decide to try to create their own cloud... I would be willing to bet that the initial launch would be a complete catastrophe. The intention here was an in-depth software/developer discussion, more so than IT.
  12. In your typical service provider patterns, yes, you're assigned personal IP addresses. But in a cloud situation, providers like Amazon and Azure are assigned huge blocks of IP addresses, which are used as needed by various clients. If you're using static, very long-term IPs with a cloud provider, you're probably not making true use of the full power of the cloud, which comes from throwaway virtual machines and dynamic, fluid hyperscale capabilities.
  13. Yeah, there are definitely options out there. But I imagine something like DU won't actually require a huge amount of network traffic per client; latency will be first and foremost. Without a requirement for twitch combat (i.e. FPS shooters), the traffic will likely be mostly game events (voxels added/broken, shots fired, target and player movement). It can probably be accomplished within 50-100 kbps (roughly 6-12 KB per second) on average per player. I could be way off on this number, but hey! It's wild speculation day! (There is a quick back-of-the-envelope version after the last post in this list.) Also, an ingress layer helps immensely with that. The responsibility of those nodes is solely: accept traffic, decrypt it, validate/authenticate it, and pass it on to the game services. The major difference between cloud/microservices and traditional game servers is that the cloud doesn't require every single bit of processing to happen on a single node, so you aren't required to micromanage performance issues to the same degree as with traditional servers. The distribution means you can handle a lot more than normal. EDIT: But you are right, UDP is generally better for things like gaming. When the game logic is primarily server-based, though, it becomes a different beast: dropped packets, except for movement and the like, can be really bad in that scenario. Possibly a hybrid/dual-stream approach is the best choice.
  14. I'm talking in the context of a "dumb" ingress layer. Yes, at the TCP level, packets would be ordered and retried as normal. But once through to the ingress nodes, that traffic is authenticated, validated, decrypted, and passed on to the actual game logic nodes. It is at this level that I imagine a queuing system might be necessary. If you have 100 ingress nodes and no queuing, that is a lot of asynchronous traffic (100 streams) being dumped into the game services (let's say 20 server nodes), and it all needs to be handled and processed at once. By implementing a queue (either in-process or separate/distributed) you can quickly accept that traffic, then process it as quickly as possible without fully flooding the services. If you use something like Azure Service Bus or Event Hubs (again, sorry for always posting M$ tech), those cloud services can ingest millions of messages per second and use message queue "topics" to intelligently route to only the nodes that require each message, instead of every connected node receiving every message, always. (A minimal sketch of this topic routing appears after the last post in this list.) Again, it's a trade-off between raw performance and adding a bit of latency to be more intelligent and reduce overall traffic ingestion on specific nodes.
  15. I apologize for constantly going back to Micro$oft tech. I use the M$ dev stack every day, so it is what I know best. There can be a number of benefits to having a level of protection at the DNS level. With the advent of things like Azure DNS, you can go even further than just security with it. In the cloud, IP addresses can be very fluid. There isn't a whole lot of need to hold on to a single public IP for long periods of time, unless that is how your system is designed. In a microservices approach, they can come and go and it wouldn't really matter. DNS protection would only help with the initial connection attempt for each client. Most clients will cache results and then just use the same given IP until something fails and it attempts a reconnect. Diving in... What can you achieve with a DNS-layer protection system? A couple of things. Primarily: automatic geographic distribution, initial load balancing, and DDoS protection. Because IPs can be in flux, a cloud-based DNS server can protect against a DNS-level DoS attack by simply blocking evil IPs at the DNS layer. That prevents a level of DoS traffic from even getting to your app. This is more beneficial if IPs change regularly; however, most app public access points are fairly static. Secondly, there is geographic distribution and load balancing. In something like the cloud DNS, you can have the server know about a number of affinity points for your app (read: datacenter-location specific). Let's say you have "connect.dualthegame.com" as your primary connection point. This service is distributed amongst 30 nodes/IPs in 3 geographically dispersed data centers. The DNS layer can know that you also have "america.connect.dualthegame.com," "europe.connect.dualthegame.com," and "asia.connect.dualthegame.com." When that initial IP resolution query comes in, the DNS service can look at your IP, check its geographic location, and return the appropriate CNAME for your region. It is also possible to select based on latency, I believe. Each of the geographic CNAMEs has a list of IP addresses for each cloud load balancer on the connection point for that datacenter. When multiple IPs are returned, the client will essentially just choose one at random and connect to that. This gives a degree of initial load balancing before your client even tries an actual connection to the app. (There is a toy version of this resolution flow sketched after the last post in this list.) A malicious botnet will be mitigated in some respects at that initial connection attempt, if it uses DNS resolution. The cloud DNS/front-end load balancers can pick up a list of evil IPs based on network traffic and block them before they even touch your app. The load balancer denial also protects your app from direct IP address attacks. If the hackers get through even that, the ingress layer has its own protection now, as I mentioned, because individual nodes are fluid and disposable.
  16. @Ripper: Those are some great points! The addition of ANYCAST (perhaps through something like the new cloud DNS services) would definitely be required for the initial connections and would help greatly with geographic distribution and DDoS protection. That would pair wonderfully with the authentication/connection reservation/encryption key generation I mentioned. This brought to mind a couple of other benefits of having an abstracted ingress gateway layer. The ingress layer would be responsible for all encryption/decryption of over-the-wire traffic to players and would easily reject any other connection attempts that have not been reserved. If they were targeted by DDoS attacks, it would be easy to scale up additional instances and just dump the attacked nodes. Player connections would simply reconnect to a new ingress node, and they would only notice some lag until automatic failover happens, plus the blip of that reconnection attempt. Everything else would happen behind that ingress layer in traffic that is inaccessible publicly, and the actual regions wouldn't be affected. The primary downside would be a slight addition to latency, but because it is all part of the cluster it is pretty much just traffic over a local network switch, so we're talking single milliseconds or less. @LurkNautili: Connections between nodes would be direct, yes, but messages would need to be queued once received to ensure they happen sequentially. But once in the queue, they can be processed and acted on asynchronously, with some intelligence to resolve conflicts. Individual services, such as a region, can also make use of replicas to allow certain read operations to occur across nodes, while write operations would happen on a single primary node to protect the system's integrity. A lot of this can just happen automatically thanks to the cloud services / service fabric layer. Because data-center connections are much more likely to be consistent and predictable than player connections, geographic distribution and synchronization might actually make a lot of sense in a cloud situation. There might be some lag for events from players in one RL region to players in another RL region, but that would happen regardless of implementation. It is inevitable. Distributing RL geographic regions via only the cloud would likely normalize a lot of those inconsistencies. Now you've got me thinking even deeper. Perhaps I need a diagram... That'll be later.
  17. Yeah, I was going to cross-post to the Kickstarter, but I figured there would be far less chance of it getting buried on the forums and I didn't want to duplicate questions. Thank you for responding, though. Sorry it became necessary. I should have noticed the answer in the KS update, but I apparently glanced over that line. My mistake, and I am happy to own up to it. It's unfortunate that the post exploded when I was just looking for a response or an official source. I'm kind of glad that this discussion didn't happen on the KS page, though. Anyways... Moving on! Thanks again.
  18. Funny, it sounds like you are trying to start an argument. A simple like on his post would have sufficed. Fortunately, I won't bite.
  19. Thank you! You are a gem. You saved me from having to write a lengthy response that should never have been necessary anyway. I will put this link on the opening post.
  20. I got an "access key" to download World of Warcraft at one point. That didn't mean I could play it without a subscription. It just meant I could download the client/game for it. NQ-Nyzaltar is the community manager for DU. It is his/her (I'm not sure which is correct) job to monitor the community (which includes the forum, last I checked) and provide responses on NQ's behalf. I assume he/she is being paid for that job. If not, he/she is being shafted by NQ. So, no. I don't feel bad asking for a public response from the person dedicated to managing community questions about DU.
  21. I appreciate that. But I'd also like to point out that posting on the official NovaQuark forum for Dual Universe, asking for a response from NovaQuark IS contacting the developer.
  22. I don't believe it would work to have them communicate directly. That'd be too much traffic, essentially bottlenecking it back down to the problem you get with a single machine. Instead, everything would need to go through some sort of event router service. The service would have a Level of Detail algorithm to intelligently route messages to valid regions (and players) on a scaled-back frequency, by distance and importance of the message. Regions adjacent to the source would get the messages on a semi-frequent basis, further regions would get them even less frequently, and even further regions may not get them at all. (There is a rough sketch of this routing after the last post in this list.) You would also need to take into account that regions can be really small in that router, so region size would also be a factor in messaging frequency. The regions could be comparable to the old-style notion of a server shard, to a degree. A region knows about all the players in it, and all players know the region they are in. This is an in-game region, a chunk of the universe's space. The region is also responsible for all the players, NPCs, and constructs in that chunk of space and performs all of the physics/actions/voxel manipulation within it. Because this is all processed on the server cluster, ping time in a region would be dependent on your connection and theirs to the server, more so than to each other like you get in typical multiplayer games. Your typical game creates a lobby and may try to establish more of a peer-to-peer connection, which won't be possible in DU. RL geographic regions would need to be handled differently. To minimize latency you would want to distribute the cluster globally, but this can create other problems, such as keeping copies of a region in sync across datacenters. I'm not sure what the best solution is there: distribute globally and minimize player latency while increasing some server-side latency (more predictable than user latency) for synchronization, or have exactly one copy of each region, where players will have to deal with additional latency dependent on how far they are from the actual datacenter hosting that region. That is a difficult problem to solve and I don't have an answer yet. Yeah, it's actually a thing. I've been in a couple of their monthly playtest sessions (30 min each month). It works shockingly well, and the battles are absolutely insane. I love it. The server tech will likely prove somewhat similar to DU's in the end. To a degree, yes. But I am a Microsoft / .NET developer, so it is what I'm used to, and actually quite clear. Linux docs make me crazy. But yes, there is a lot of marketing in there. That was likely the trade-off Illyriad Games made with Microsoft. They got early access to Service Fabric and a lot of direct assistance from Microsoft in the architectural design/development. In exchange, they share some cross-marketing. Illyriad has actually made a lot of code contributions to the .NET Core codebase, making huge leaps in performance such that ASP.NET Core can actually surpass Node.js now in raw throughput. I'm impressed by them.
  23. The fact of the matter is that no one has answered my actual question yet. I'm looking for a source. Proof. A concrete communication from NQ. Everyone is saying "it is known," but no one has provided a link to the source yet. I have been looking and cannot find it. No one else seems to be able to find it yet either. I am only being told "oh, I read it somewhere, I'm sure of it." The only logical conclusion I can make thus far is that it hasn't been communicated, but is only hearsay. Hearsay doesn't hold up in court, and it doesn't for me either. All I'd like is a reference to a public communication from NQ stating it directly.
  24. Did I do derp math again? I always mess up some mundane detail. (I hope at least one of you gets that reference.) The actual math doesn't matter anyway. The point is: you have a layer for ingress traffic. Service Fabric doesn't handle things like inter-cell connectivity or geographic dispersion. That is something you'd have to write yourself. Service Fabric is a framework/engine for clustering, service partitioning, and node/failover management. You'd handle regions by making each region an "actor" or "service," for example, and distributing those services across available nodes. You're responsible for the spatial divisions (an octree is just a highly efficient way of indexing spatial data, and many games use them; there's a bare-bones sketch of one after this list). Age of Ascent (a space game supporting 50,000+ players in a single twitch-combat battle) has a lot of publicly available architecture information, which I based a lot of my hypothesis on. It's pretty well documented too. http://web.ageofascent.com/blog/
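
Since a couple of the posts above throw thrust numbers around, here is the quick calculation behind them. It is just the basic F = m × (g + a) relation under simple Newtonian assumptions; the 1000 kg mass and the 4g/10g figures come from those posts, and DU's actual flight model may of course differ.

```python
# Rough hover/acceleration thrust check. Assumes simple Newtonian physics;
# DU's real flight model may behave differently.

G = 9.81  # m/s^2, standard gravity

def required_thrust(mass_kg: float, accel_g: float = 0.0, gravity_g: float = 1.0) -> float:
    """Thrust (N) needed to cancel gravity and still accelerate at accel_g."""
    return mass_kg * G * (gravity_g + accel_g)

mass = 1000.0  # kg, the example mass from the posts above

print(required_thrust(mass))             # ~9810 N just to cancel gravity at 1g (no lift)
print(required_thrust(mass, accel_g=3))  # ~39 kN, i.e. 4x hover thrust for ~3g of climb
print(required_thrust(mass, accel_g=9))  # ~98 kN, i.e. 10x hover thrust for plenty of headroom
```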
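The per-player bandwidth guess above is pure speculation on my part, but here is the unit conversion behind it plus a rough aggregate, assuming a made-up concurrency of 10,000 players just to get a feel for the scale.

```python
# Unit-conversion sanity check for the per-player bandwidth guess.
# The 50-100 kbps figure is speculation; the 10,000-player concurrency is purely illustrative.

def kbps_to_kBps(kbps: float) -> float:
    """Convert kilobits per second to kilobytes per second."""
    return kbps / 8.0

low, high = 50.0, 100.0  # kbps per player

print(kbps_to_kBps(low), kbps_to_kBps(high))  # ~6.25 to 12.5 KB/s per player

players = 10_000  # hypothetical concurrent players
aggregate_mbps = high * players / 1000.0
print(aggregate_mbps)  # ~1000 Mbps (~1 Gbps) of ingress at the high end
```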
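A minimal in-process sketch of the topic-based routing idea from the ingress/queue discussion: ingress nodes publish into a queue, and each game-service node only sees the topics it subscribed to instead of every message. This is just the pattern, not the Azure Service Bus / Event Hubs API, and the topic names are made up.

```python
# Minimal in-process sketch of topic-based routing: cheap publish on the ingress
# side, drain-and-dispatch on the game-service side, handlers only fire for
# topics they subscribed to. Topic names are hypothetical.
from collections import defaultdict, deque

class TopicQueue:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of handler callables
        self.pending = deque()                # (topic, message) awaiting processing

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Ingress nodes call this: a cheap append, no processing yet.
        self.pending.append((topic, message))

    def drain(self):
        # Game-service side: process at its own pace; only matching subscribers run.
        while self.pending:
            topic, message = self.pending.popleft()
            for handler in self.subscribers[topic]:
                handler(message)

bus = TopicQueue()
bus.subscribe("region.42.voxel", lambda m: print("region 42 voxel update:", m))
bus.subscribe("region.42.movement", lambda m: print("region 42 movement:", m))

bus.publish("region.42.voxel", {"pos": (10, 4, 7), "op": "remove"})
bus.publish("region.99.movement", {"player": "CodeGlitch0"})  # no subscriber -> nothing fires
bus.drain()
```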
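And a toy version of the geo-aware DNS resolution flow described above: the client's IP gets a regional endpoint plus its load-balancer IPs, and known-evil IPs get dropped before they ever see an endpoint. The geolocation lookup is faked, the IPs are documentation ranges, and a real cloud DNS service would handle all of this for you.

```python
# Toy geo-aware DNS resolution: return the regional connection endpoint and one
# of its load-balancer IPs based on where the client appears to be. The
# geolocation lookup and IP addresses are placeholders.
import random

REGIONAL_ENDPOINTS = {
    "america": ("america.connect.dualthegame.com", ["203.0.113.10", "203.0.113.11"]),
    "europe":  ("europe.connect.dualthegame.com",  ["198.51.100.20", "198.51.100.21"]),
    "asia":    ("asia.connect.dualthegame.com",    ["192.0.2.30", "192.0.2.31"]),
}

BLOCKED_IPS = {"198.18.0.66"}  # "evil" IPs dropped at the DNS/front-end layer

def mock_geolocate(client_ip: str) -> str:
    # Placeholder: a real service maps IP ranges to geographic regions.
    return "europe"

def resolve(client_ip: str):
    if client_ip in BLOCKED_IPS:
        return None  # DoS mitigation: never even hand out an endpoint
    cname, ips = REGIONAL_ENDPOINTS[mock_geolocate(client_ip)]
    # The client picks one of the returned IPs at random: initial load balancing.
    return cname, random.choice(ips)

print(resolve("203.0.114.5"))  # ('europe.connect.dualthegame.com', one of its IPs)
print(resolve("198.18.0.66"))  # None: blocked before touching the app
```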
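Here is a rough sketch of the Level of Detail event routing I described: regions close to the event source get every update, farther regions get them less often, and beyond a cutoff they get nothing. The distance bands, tick intervals, and the tiny Region class are all invented for illustration.

```python
# Level-of-Detail event routing sketch: forwarding frequency falls off with
# region distance from the event source. Bands, intervals, and the Region
# class are made up; a real router would also weigh message importance and region size.

class Region:
    def __init__(self, coords):
        self.coords = coords  # integer (x, y, z) position on the region grid

    def grid_distance(self, other):
        # Chebyshev distance: how many regions apart on the grid.
        return max(abs(a - b) for a, b in zip(self.coords, other.coords))

# (max distance in regions, forward every Nth tick)
LOD_BANDS = [(1, 1), (3, 4), (6, 16)]
MAX_RANGE = 6  # beyond this, the event is not forwarded at all

def should_forward(distance, tick):
    if distance == 0:
        return True  # the source region always processes its own event
    if distance > MAX_RANGE:
        return False
    for max_dist, every_n in LOD_BANDS:
        if distance <= max_dist:
            return tick % every_n == 0
    return False

def route_event(source, regions, tick):
    """Return the regions that should receive an event from `source` on this tick."""
    return [r for r in regions if should_forward(r.grid_distance(source), tick)]

source = Region((0, 0, 0))
grid = [Region((x, 0, 0)) for x in range(10)]
print(len(route_event(source, grid, tick=0)))  # everything within range 6 -> 7 regions
print(len(route_event(source, grid, tick=1)))  # only distance <= 1 -> 2 regions
```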
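Finally, a bare-bones octree, since I keep saying it is just an efficient way of indexing spatial data: points get bucketed into cubes that split into eight children when they fill up. The capacity and bounds are arbitrary and nothing here is DU-specific.

```python
# Bare-bones octree: a spatial index where each node covers a cube and splits
# into 8 children once it holds more than `capacity` points. Illustrative only.

class Octree:
    def __init__(self, center, half_size, capacity=8):
        self.center, self.half = center, half_size
        self.capacity = capacity
        self.points = []
        self.children = None  # becomes a list of 8 Octrees once subdivided

    def _child_index(self, p):
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _subdivide(self):
        cx, cy, cz = self.center
        h = self.half / 2
        self.children = [
            Octree((cx + (h if i & 1 else -h),
                    cy + (h if i & 2 else -h),
                    cz + (h if i & 4 else -h)), h, self.capacity)
            for i in range(8)
        ]
        for p in self.points:
            self.children[self._child_index(p)].insert(p)
        self.points = []

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity:
                self._subdivide()
        else:
            self.children[self._child_index(p)].insert(p)

    def query_cube(self, center, half):
        """Return all points within an axis-aligned cube (a crude range query)."""
        # Skip this node entirely if the query cube doesn't overlap it.
        if any(abs(center[i] - self.center[i]) > half + self.half for i in range(3)):
            return []
        if self.children is None:
            return [p for p in self.points
                    if all(abs(p[i] - center[i]) <= half for i in range(3))]
        out = []
        for child in self.children:
            out.extend(child.query_cube(center, half))
        return out

tree = Octree(center=(0, 0, 0), half_size=1024)
tree.insert((10, 20, 30))
tree.insert((500, -200, 60))
print(tree.query_cube(center=(0, 0, 0), half=100))  # [(10, 20, 30)]
```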