
Whalekit
Member · Content Count: 10

  1. Automated mining creates a feedback loop: mine resources -> build miner robots (using the factories that will be in the game) -> mine more resources -> build more miner bots -> mine more resources, and so on. That either makes whoever starts doing it first super-rich and powerful, or makes resources dirt cheap, and automated mining doesn't give you many cool capabilities beyond mining itself. Automated building, by contrast, does not create a feedback loop, because you can't build more builders without resources. And automated building of at least voxel structures (without placing elements), controlled from Lua, would allow cool things like: a robot that builds procedurally generated mazes (yes, I already mentioned it earlier in this topic), a robot that automatically repairs your ship's hull or your town's wall, a robot that lays voxels right in front of an enemy ship's guns, a robot that obstructs the internals of an enemy ship with walls, and all sorts of crazy stuff someone will think of. As for lost jobs for newbies, let's not forget that there will already be factories in the game. About the jobs: what are the newbie jobs other than mining and building? Let's see: trucker and taxi driver (land and space versions), crew member on a big ship (a gunner, for example), infantryman, maintainer of automated things (remember that Lua-controlled things will only work around a player?) who also guards them (or at least pushes the alarm button in case of danger) and probably operates them a bit, and geological exploration (walking around unexplored regions looking for deposits with scanning equipment).
  2. A secret space dwelling inside a big, resource-poor asteroid in a resource-poor area (so that nobody would want to mine it and discover my base).
  3. I like the idea, but I think rather than an individual game mechanic it should be something implementable using Lua and in-game Lua-controllable elements. So you would have Lua software you would use to create a model (which would be just a Lua variable), and then a builder robot would build it. Another application for that, off the top of my head: with building machines that can be controlled from Lua you could create a robot that builds, for example, procedurally generated mazes! And the good thing about it is that it's not going to ruin the economy the way automated mining would.
  4. I wasn't talking about MIDI keyboards. Why are you bringing them up? I was talking about sound cards for computers, which allowed them to play sound effects and tracker music (which was created without MIDI keyboards). I think "sound card"/"SPU" is a better name for what I'm suggesting here than "synthesizer", because when you say "synthesizer" people usually think of something connected to a keyboard (possibly an embedded one) that you play by hand, while an SPU is a thing controlled programmatically, with commands sent from code. With a sound device, the system clock and built-in functions (not even the standard library is needed), Lua can do that, no problem. And if you think there is some problem I'm missing that would stop me from creating a jukebox with a sound device and Lua, please elaborate on exactly what limitation you're talking about. But you're right that with the "sound embedded in an HTML5 screen" approach you can't do much through Lua scripting. I do not. I assume low-tier sound devices with little to no sound memory will be used by almost all Lua scripters to add sound effects to their applications, and high-tier ones (with more channels and sound memory) will be used by composers to create music. Everyone else will be using them too, not as developers, but as users of the scripters' applications and listeners of the music. Since they are not likely to change anything after buying (building from a blueprint) a copy of a thing made by a scripter/composer, you can employ a copy-on-write optimisation: all audio devices that are copies of one, with the same audio memory content, have it stored in shared physical space on the server (for example, if you've got 1000 (or any number of) 1 MB audio devices with the same audio memory content (because they were created from the same blueprint and weren't changed after creation), they all take up just 1 MB of space on NQ servers). If you are serious about storage size - look...
Say each player has created (let's crank it up all the way to eleven) 1000 sound devices with 1 MB of sound memory each, and each one is unique (so you can't do the copy-on-write optimisation I described above). Let's say the monthly subscription fee is $10. So you've got 1 GB per player. Now, let's see here, here and here. As you can see, storing data costs 1.8-2.6 cents per GB per month in frequent-access/hot storage and 0.2-0.7 cents in rarely-accessed/cold storage. So if a player somehow uses all 1000 of his sound devices on a regular basis, storing their audio data will cost 2.6 cents at worst, and if the player puts them in a container and never uses them again, you can move their audio data to cold storage, making the cost of keeping them 0.7 cents at worst (and if you also compress it with FLAC, you will get at least 20% compression, making it 0.6 cents). 3 cents out of $10. As you can see, a game with a monthly subscription fee can easily afford to store a bit of audio data. Storage space is cheap; that is why you can upload pictures to this forum, and why Dropbox gives you 2 GB of storage for free indefinitely. And that was a worst-case scenario - in reality players won't each be creating 1000 of those things, because that is a waste of resources and money, since not every player is a scripter/composer. Most sound devices will be copies, so you can do the copy-on-write optimization. Also, old ones are likely to be reused instead of piling up; but if they are not, you can make it so their memory gets wiped (freeing space on the servers) after a few months without use, solving the issue of audio data piling up over time. So, storage space is not a problem. There is no reason to reinvent the wheel when it comes to ship building - you can make 3D models of your construct offline, import them into the game and have a special device print them.
Also, there is no reason to reinvent the wheel when it comes to scripting - you can put a script on your own web server, have it connect to NQ servers and use an NQ web API to control in-game constructs. So you don't need to store Lua code on NQ servers, and there's no need to run these scripts on game clients. The more of the game is outside of the game, the better! /S Could you link me to an official source that states this is how it's done in the game, please? Or is it just your guess? You do know you can also play CANCER noises with an HTML5-based player, right? By the same reasoning one could say there shouldn't be voice chat - someone could transmit annoying sound through it. But, as I already said, this is not a problem in any of these cases, because you can give the player an option to mute the person making annoying noises over VoIP, mute the device whose HTML element plays an annoying sound, and mute the audio device making an annoying sound. Sorry, what? VoIP is the thing you use to talk to players; the sound devices I'm suggesting are a thing you can use to create sound effects for Lua applications and to create and play music in-game. Your idea seems to be: in order to play a sound you call screen.setRawHTML/setContent and pass something like <audio src="URL/of/sound.ogg" autoplay=true></audio> as the "HTML" argument. That gives you the ability to start playing a sound. You can also probably stop it by calling screen.deleteContent(id of this HTML element) (see the Lua tutorial video for the screen functions). But it won't let you pause and resume the sound, rewind it or change its volume from Lua; you can't get notified when the sound ends; you can't even be sure it's playing - for some players it might still be loading while it's already playing for others, because each game client loads it by itself from the internet. If you want to play several sounds consecutively (i.e. multiple songs from a playlist), you could start the first sound, then stop it and start the next one after (duration of first sound) seconds. BUT because the first sound may have started late (it took a few seconds to load after you put the "audio" HTML element on the screen), it will also end late - and your program doesn't know that, so it cuts the sound off abruptly, earlier than it should have ended. So you can have a jukebox, but it has problems. In summary: you can start a sound, but you can't control its playback, you have no information about the playback, and your ability to parametrize it is limited to its URL. Also, it exists completely outside of the game, somewhere on the internet. Your suggested approach is better than nothing, but it is very limited. My approach, on the other hand, gives the programmer a lot of control over sound right in the game - for example, I think you could easily create even a primitive TTS with a sound device, by populating its audio memory with phones and playing them consecutively. And the task of creating an audio player with a playlist/composition queue, a.k.a. a jukebox, is trivial.
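The playlist workaround described above can be sketched in Lua to show exactly where it breaks. This is a hedged sketch: screen.setRawHTML is taken from the post's mention of the Lua tutorial video, while the timer mechanism (unit.setTimer plus an onTimer handler) and the URLs are assumed names for illustration only.

```lua
-- Naive HTML5-audio playlist: advance by wall-clock time.
local playlist = {
  { url = "http://example.com/song1.ogg", duration = 183 },
  { url = "http://example.com/song2.ogg", duration = 241 },
}
local current = 0

local function playNext()
  current = current % #playlist + 1
  local track = playlist[current]
  -- Replace the screen content with a new autoplaying <audio> element.
  screen.setRawHTML('<audio src="' .. track.url .. '" autoplay="true"></audio>')
  -- The flaw: this assumes playback began the instant the HTML was set.
  -- If a client spent 3 s buffering, the track gets cut off 3 s early,
  -- and the script has no event or query to find that out.
  unit.setTimer("next", track.duration)
end

function onTimer(tag)            -- assumed: fired when the timer elapses
  if tag == "next" then playNext() end
end

playNext()
```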
  5. It was inspired more by old sound cards, like this one, but yeah, you can call it a synthesizer. Actually, what I was saying is that you can build a jukebox with this thing and some control unit. It's enough that you can send commands to the device - that is what's important. You could also use it to fill sound memory with procedurally generated sound, though that would require doing it in many small steps using the system clock (one continuous script that tried to write thousands of values would run out of time). Sound editing should not be off the table - it's essential. And audio trolling can just as easily be done through voice chat or HTML5 music - and is just as easily solved, by muting the source in the game menu. The entire point of having audio memory is being able to use your own samples rather than predetermined ones, which greatly expands a creator's possibilities. This synthesizer is simple and needs very little CPU (less than 1% of my 8-year-old CPU). Now, about memory: in my example I said the audio device has 44100*8 bytes. That is 345 kilobytes. For reference, the 2 images in the first post have a combined size of 271 KB, and they sit on the NQ servers. A free Dropbox account lets you store up to 2 GB of data on its servers. In the game, creating a sound device will require resources, and not every player will create unique sound devices - most will buy (or build from a blueprint) a copy from a musician and never edit it, so you can do a copy-on-write optimization (have 2 or more sound devices share the same physical server memory until one of them is changed). But even if each player created 100 unique sound devices (and they wouldn't, because that's just a waste of resources) with 400 kilobytes of audio memory each, it would be just 40 MB per player. I think a game with a monthly subscription fee can afford to store 40 MB of data per player.
But even if players create sound devices uncontrollably, say 100 unique sound devices per month, taking up more and more server space - NQ can make it so that after some time without energy/maintenance a sound device's memory is wiped, and thus the server space is freed. (Realistically, though, that is not going to happen anyway.) So, as you can see, server space is not a concern here. In fact, one of the main reasons I suggested this concept is that it's so compact - you only store the instruments' samples and the sequencing data for the music. Having said all that, I now think players could have sound devices with a full 2 MB of audio memory - for a higher resource cost, of course - which would be just enough for almost any track. Is there official confirmation that you will be able to embed frames and data from the internet into the in-game displays? That sounds interesting, but I have never heard of it. BTW, do I understand correctly that with HTML5 you can only cover playing a single static music track? Not even editing playlists? What about sound effects for your Lua application or game? My suggestion covers not just playing static music - it also allows you to create a Lua game or program with sound effects, jukeboxes where players can queue the tracks they want, etc. It gives you so much control that you can even make dynamic music (like you can hear in some game soundtracks). And all of that right in the game, programmable with Lua.
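The numbers in this post are plain arithmetic and can be checked in any standalone Lua interpreter (nothing here touches a game API):

```lua
-- Checking the memory figures quoted above.
local soundMem = 44100 * 8          -- 352800 bytes of sound memory
print(soundMem / 1024)              -- 344.53125, the "345 kilobytes"

-- Worst case from the text: 100 unique ~400 KB devices per player.
local perPlayer = 100 * 400 * 1024
print(perPlayer / 2^20)             -- 39.0625, i.e. the "40 MB per player"
```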
  6. I am not talking about using HTML (or any other web technology) to play audio in any way. I don't know what gave you the idea that I am. I am suggesting adding an element - a sound device - that can be connected to a control unit, just like an engine, a door, a lamp, etc. And just like other elements it has actions - in its case, not turning a light on and off, but reading/writing its internal audio memory, playing sounds on its channels, and so on. By populating a sound device's memory with samples and then consecutively calling actions (probably driven by a system timer event) to change the pitch, sound source (a sample from audio memory or an oscillator) and filters of its channels, and to play those samples on them, you can play music. So, as you can see, there is no need for HTML5 or the Web Audio API or anything like that to play music. This does not require any change to Lua, beyond exposing the sound device's "actions"/functions to it, just as the actions of a light (such as activate, deactivate, toggle) are exposed. It's just a new element with actions callable from Lua that is capable of playing sound. (light.activate() switches on the light; soundDevice.channel_start_note(1) plays a sound (whatever you previously set channel 1 to).) Everything becomes doable from the Lua script of a control unit once just this one element (the sound device) and its actions are available. Have I explained my idea clearly now? I really tried, maybe too hard, but I don't know exactly what was unclear in my previous messages, so I tried to clarify everything.
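The "element with actions" parallel above can be shown in a few lines. light.activate() is taken from the post; every soundDevice call is the proposed API from this thread, not an existing game API, and the way the element is linked to the script is assumed:

```lua
-- Same wiring pattern as existing elements: a linked element plus exposed actions.
light.activate()                                         -- existing element action
soundDevice.channel_set_sound_source_osc(1, "sin", 440)  -- channel 1: 440 Hz sine
soundDevice.channel_set_volume(1, 0.5)
soundDevice.channel_start_note(1)                        -- "switch on" the sound
soundDevice.channel_end_note(1, 2000)                    -- stop it 2 s later
```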
  7. In the first post here I described a sound device, and its API allows you to create music entirely in-game, with just the device and some scripting. Using its API you can create an audio player program, with tracks, playlists, and all of that. No need for any external databases or servers - just a sound device (for playing the music) and a display (for the audio player GUI) connected to a Distributed Processing Unit with the right Lua script in it.
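A jukebox of the kind described above could be sketched like this. Everything is a sketch against the proposed API: track boundaries are assumed to be regions of the device's audio memory, and the timer mechanism (unit.setTimer plus an onTimer handler) is an assumed name, not a confirmed game API:

```lua
-- Jukebox: each track is a (first, last) sample region in audio memory.
local rate  = 44100
local queue = {
  { first = 0,         last = rate * 30, length = 30 },  -- track 1: 30 s
  { first = rate * 30, last = rate * 75, length = 45 },  -- track 2: 45 s
}
local pos = 0

local function playNext()
  pos = pos % #queue + 1                       -- wrap around the queue
  local t = queue[pos]
  soundDevice.channel_set_sound_source_memory(1, 2, rate, t.first, t.last, false)
  soundDevice.channel_start_note(1)
  unit.setTimer("nextTrack", t.length)         -- advance when the track ends
end

function onTimer(tag)                          -- assumed timer event handler
  if tag == "nextTrack" then playNext() end
end

playNext()
```

Unlike the HTML5 approach, the script knows exactly when a track starts and ends, because playback is driven by its own calls rather than by each client's download speed.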
  8. Why do you think so? All you need for that is Lua code in the jukebox.
  9. I'm sure they will add some way to play sound. But will it be note blocks that can play one predetermined sound, a MIDI player with predetermined instrument sounds, or a powerful API that lets you create and play something like tracker music, or even better, right in the game? It's not about my impatience; I just don't want the game to end up with a poor sound API (like the music block from Minecraft or a MIDI player with predetermined sounds), and seeing how rich players' capabilities are for creating visual content (SVG, HTML) gave me hope that the devs could consider making something just as good for audio.
  10. Suggestion to add sound devices. In DU you can present visuals using HTML, SVG and widgets, but when it comes to audio, things are not so bright. Sound is important if you want players to be able to make games. Sound can play a role in interfaces, be used for alarms and, of course, for playing music! So I propose adding Lua-controlled "SPUs"/"sound devices"/"sound units" to the game, which would allow players to do all kinds of stuff - from simply playing a sound on a notification in their programs, to implementing sound effects in a game, or even creating sound trackers and sequencers (and then composing music in them).

A sound device:
- has sound memory of, say, 44100*8 bytes (the number depends on the sound device tier), which can be read and written as integers using get_sample/set_sample.
- has 8 channels (also depends on the sound device tier). Each channel can be set to play samples from an area of audio memory or to use an oscillator. Each channel can have 1 simple filter on it.

Sound device API.

Sound memory manipulation functions:
- samples_count(sample_depth) - returns how many integer values of sample_depth bytes each the sound memory can hold.
- get_sample(sample_depth, sample_index) - interpreting sound memory as an array of signed integers of sample_depth bytes each, returns the sample_index'th integer from this array. sample_depth can be 1, 2, 4 or 8; on any other value get_sample returns nil. sample_index wraps around if higher than the number of integers in the array.
- set_sample(sample_depth, sample_index, new_value) - interpreting sound memory as an array of signed integers of sample_depth bytes each, sets the sample_index'th integer to new_value.
- getNSamples(sample_depth, sample_index, N) - same as get_sample, but instead of one integer it returns a table of N values from audio memory, starting at position sample_index. Could be merged with get_sample.
- setNSamples(sample_depth, sample_index, new_value) - same as set_sample, but instead of setting one integer it sets #new_value samples to the values from the new_value table, starting at position sample_index.

Channel control functions:
- channels_count() - returns the number of channels the sound device has.
- channel_set_sound_source_memory(channel_num, sample_depth, sample_rate, start, end, loop) - sets the channel to play sound from sound memory.
- channel_set_sound_source_osc(channel_num, type, frequency, osc_param) - type is a string: "noise", "sin", "tri", "square". Triangular and rectangular waveforms take 1 more parameter for the rate.
- channel_set_volume(channel_num, new_volume, time, delay) - sets the channel volume to new_volume. If the time arg is provided, the volume changes gradually over time milliseconds (unless interrupted by another set_volume command). If the delay arg is provided, the volume change starts delay milliseconds after the command is called.
- channel_set_pan(channel_num, new_pan, time, delay) - sets the channel pan to new_pan. 0 is left, 1 is right, 0.5 is center.
- channel_set_pitch(channel_num, new_pitch, time, delay)
- channel_start_note(channel_num, delay) - starts a note on the channel. The start is delayed by delay milliseconds if the delay parameter is provided. If another note was playing on this channel, it ends.
- channel_end_note(channel_num, delay)
- channel_set_filter_type(channel_num, type, delay) - filter types are "none", "highpass", "lowpass", "comb", "bandpass".
- channel_set_filter_base_frequency(channel_num, freq, time, delay)
- channel_set_filter_gain(channel_num, gain, time, delay) - in dB, where applicable to the filter.
- channel_set_filter_param(channel_num, param_num, param_amount, time, delay) - filter-specific params, such as resonance for "highpass"/"lowpass" or bandwidth for "bandpass".
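To make the proposed API concrete, here is a minimal usage sketch: synthesize one cycle of a sine wave into sound memory as 16-bit samples, then loop it on channel 1. All soundDevice functions are the proposal above, not an existing game API; the math calls are standard Lua:

```lua
local rate  = 44100
local depth = 2                        -- 16-bit signed samples
local cycle = math.floor(rate / 440)   -- 100 samples per period (~441 Hz after rounding)

-- Build one period of a full-scale sine wave as a Lua table.
local wave = {}
for i = 1, cycle do
  wave[i] = math.floor(32767 * math.sin(2 * math.pi * (i - 1) / cycle))
end

soundDevice.setNSamples(depth, 0, wave)                  -- write the cycle at index 0
soundDevice.channel_set_sound_source_memory(1, depth, rate, 0, cycle, true)  -- loop it
soundDevice.channel_set_volume(1, 0.8)
soundDevice.channel_start_note(1)                        -- sustained tone until end_note
```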