Showing results for tags 'developer'.
Found 2 results

  1. As a future developer in Dual Universe, the idea of DACs as payment seems like an interesting prospect. But the more I think about it, the more it seems like a bit of a d-bag move on the part of the developers. I don't want this to be an overtly negative post, but bear with me. I intend to create a whole lot of stuff in game (ships, defenses, tools, etc.) with full Lua scripting to back them all up. These things will take a TON of time to develop if you hope for a decent result. However, with Dual Access Coupons being the only *real* form of payment I can hope to receive in game, it feels like a bit of a slap in the face. Think about it.
I spend 25 hours working on a super awesome engine control module with advanced auto-maneuvering for someone who requested a custom job. Joe Space Captain goes to the Dual website and buys two DACs to give me as payment for the work I did. Great, I got this month and next in game for free as payment: 30 euro of savings or whatever for me.
Now suppose that instead of a custom job, I make a super awesome base turret blueprint that I sell on the open market. Let's say 250 people buy it at 500 credits each, and I trade those 125,000 credits for 5 DACs that other players are selling on the market. What happened here? I now have 5 free months in game, which I may or may not ever use, but NovaQuark just made 75 euro in real cash for the work I did. Once I start collecting DACs like candy, because 10,000 users have realized how good my super-awesome turret is and bought it, NQ is rolling in money for work I did, and the DACs I've accrued have become virtually useless to me.
Obviously I'm assuming a monthly fee of 12 euro and a DAC price of 15 euro; the actual numbers don't really matter. The more I make and sell on the market, the more trivial the DACs become for me, while NQ keeps making constant real-money value off my work.
My alternative would be to just hoard the in-game currency, which has equally nil real-world value. I have to imagine that under this payment system, serious developers and builders will eventually start dealing outside the game, making contracts for in-game work with payment in real cash. That's the only way building things would actually be worth it over the long term. All of the really good stuff in game will only be accessible through various user-run black markets outside the game, with the in-game market used only for trivial purchases, for transferring items, and for earning the in-game money needed for resource accumulation. I could be wrong, but that's what I foresee happening. What are your thoughts?
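To make the numbers above concrete, here's a toy calculation in Python. The 15-euro DAC price and 12-euro subscription are the assumptions stated in the post; the 25,000-credit market price per DAC is my own assumption, chosen so the example matches the 5-DAC outcome described:

```python
# Toy model of the scenario above: a creator sells blueprints for in-game
# credits and converts the credits to DACs on the player market, while NQ
# was paid real money for every DAC that originally entered the game.
DAC_PRICE_EUR = 15      # assumed website price of one DAC
SUB_VALUE_EUR = 12      # assumed value of one month of game time

def creator_vs_nq(copies_sold, price_credits, dac_cost_credits):
    credits = copies_sold * price_credits
    dacs = credits // dac_cost_credits
    # NQ earned euros when those DACs first entered the game...
    nq_revenue_eur = dacs * DAC_PRICE_EUR
    # ...while the creator's return is capped at game time they can use.
    creator_value_eur = dacs * SUB_VALUE_EUR
    return dacs, nq_revenue_eur, creator_value_eur

# 250 buyers at 500 credits, DACs trading at 25,000 credits each:
print(creator_vs_nq(250, 500, 25_000))     # → (5, 75, 60)
# Scale to the 10,000-buyer case and the gap only widens:
print(creator_vs_nq(10_000, 500, 25_000))  # → (200, 3000, 2400)
```

The point of the model is the asymmetry: NQ's revenue grows linearly with sales, while the creator's DAC pile quickly exceeds any game time they could personally use.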
  2. I'm a developer, you're a developer, she's probably a developer too! Let's face it, DU is born to attract the developer types. So let's start a (serious) wild speculation thread about what we think the back-end server architecture might look like!
I have to imagine that since NQ is targeting a single-shard server structure, the back end will probably need to be broken up into microservices. But since it is a real-time game, it will also need to be as compact as possible, with as few network boundaries to cross as possible. A server cluster, perhaps similar to Azure Service Fabric, would do the job very well, with services partitioned out and distributed across nodes for density, automatic failover, and scaling.
First off, there will be a single point of contact for establishing and authenticating connections to the servers. The authentication service will check credentials, negotiate the session security keys, and reserve a socket connection endpoint on the gateway layer for the client. The client will then connect to the specified endpoint with the provided symmetric keys to prevent tampering. The gateway layer will be scaled as necessary to provide raw network throughput; at 5 Mbps of bandwidth per client, that gives a theoretical maximum of roughly 200 clients per server before the gigabit network ports are fully saturated. Incoming packets (messages) are dropped into an event queue for processing by the back-end services.
The regions are partitioned spatially using an octree (essentially a 3-dimensional binary search, where every cubic region of space is split in half on each dimension, into eight cubes, as necessary). The regions are separate services, spread amongst server nodes for density and scalability. Each region is responsible for calculating physics on region objects and routing events between players in the region.
A level-of-detail system is also in place for sending lower-frequency important messages across regions. As more players move into a region, it is split cubically into sub-regions that are redistributed amongst nodes in the cluster. As players leave, sub-regions are collapsed back into the parent region to conserve resources. When the cluster hits specified usage limits, nodes are added or removed to scale up or down as needed. The service cluster framework (for example, Azure Service Fabric) is responsible for redistributing partitions across server nodes and replicating services for failover. Each of these scale units is also distributed geographically to keep latency low, with a backplane in place to keep the geographic regions in sync.
That's my initial spitball idea for the architecture. Feel free to elaborate, correct, or share your own architecture ideas. Let's get a conversation going.
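The octree splitting described above can be sketched in a few lines of Python. This is a minimal toy, not NQ's actual implementation; the per-region player threshold and the flat list of player positions are assumptions for illustration:

```python
from dataclasses import dataclass, field

SPLIT_THRESHOLD = 4  # hypothetical per-region player cap before splitting


@dataclass
class Region:
    # Axis-aligned cube: center point plus half the edge length.
    cx: float
    cy: float
    cz: float
    half: float
    players: list = field(default_factory=list)
    children: list = field(default_factory=list)  # 8 sub-regions once split

    def insert(self, pos):
        """Route a player position into this region, splitting if crowded."""
        if self.children:
            self._child_for(pos).insert(pos)
            return
        self.players.append(pos)
        if len(self.players) > SPLIT_THRESHOLD:
            self._split()

    def _split(self):
        # Cut the cube in half on each axis, yielding eight child octants.
        h = self.half / 2
        self.children = [
            Region(self.cx + dx * h, self.cy + dy * h, self.cz + dz * h, h)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        pending, self.players = self.players, []
        for p in pending:  # redistribute existing players into the octants
            self._child_for(p).insert(p)

    def _child_for(self, pos):
        # Octant index from the sign of each coordinate relative to center.
        ix = 1 if pos[0] >= self.cx else 0
        iy = 1 if pos[1] >= self.cy else 0
        iz = 1 if pos[2] >= self.cz else 0
        return self.children[ix * 4 + iy * 2 + iz]
```

Collapsing sparse regions back into the parent, as described above, would be the inverse operation: when a subtree's total player count falls below the threshold, gather the players into the parent and drop the children.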