
Which violin sounds better slow?

Which violin sounds better fast?

A look at music software

Are AI virtual instruments the future?

AI is a hot topic now, and one we are sure will remain so for many years. What AI actually means beyond the marketing buzzword is still unclear, but what it represents is becoming clearer, and much of the focus is on humans, art and the need to create.

From our perspective, on a purely human artistic level (ignoring the very real struggles it will pose for those whose livelihoods depend on creating music and audio commercially), very little will change. Most of us (ourselves included!) lose money making music, and yet if we lived an infinite number of years we would be content to lose an infinite amount of money. If AI could make "better" music than you, it would have no effect on your desire to create: music is a way to communicate the self, and having something else do that for you would be pointless and could never succeed. It's not just the result, it's the process.

Having said this, AI will become part of our world, so with that in mind, where do we want to shape its path? For us, text-prompt-to-audio falls into the futile endeavour described above, but AI used to enhance and humanise virtual instruments could be really exciting. It could build on current sample-based instruments, with offline rendering providing better transitions and a closer match between intent and result, and it could expand on current sample modelling, which often provides better playability at the expense of audio quality.

What does this mean for music computer technology? If we do end up going down the fewer-samples route, there will be less need for RAM and storage, but the processing demand will be much higher: you are likely to be looking at a high-end graphics card for the AI component of the instrument.

Where are we today? (June 2025)

We've put a few audio samples together. These were quick demos: not quantized, performed live on a MIDI keyboard through the virtual instrument, with no extra editing to make the most realistic possible sound. This was a test of creating quickly. Let us know which you prefer!
In the video above you'll find a track that was created with a mixture of sample-based and AI virtual instruments. We were using ACE Studio for the AI violin, and it's impressive but currently a little limited: it can't take anything other than note and note-length MIDI information, so it has to guess at what you were looking for with any other expression.
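To make that limitation concrete, here is a minimal sketch (in Python, using the mido library) of the difference between the note-and-length data such a renderer receives and the controller data that normally carries dynamics. The file name, and the choice of CC1 (mod wheel) and CC11 (expression) as the dynamics controllers, are our own illustrative assumptions, not anything ACE Studio documents.

# Minimal sketch: separate the note data a "notes only" AI renderer sees
# from the controller data (CC1 / CC11) that usually carries dynamics.
import mido

mid = mido.MidiFile("violin_performance.mid")   # assumed export from the DAW

notes_only = []    # what a note-and-length-only renderer would work from
expression = []    # the dynamics information it currently has to guess at

for track in mid.tracks:
    abs_time = 0
    for msg in track:
        abs_time += msg.time                     # delta ticks -> absolute ticks
        if msg.type in ("note_on", "note_off"):
            notes_only.append((abs_time, msg.type, msg.note, msg.velocity))
        elif msg.type == "control_change" and msg.control in (1, 11):
            expression.append((abs_time, msg.control, msg.value))

print(f"{len(notes_only)} note events kept, "
      f"{len(expression)} expression events ignored")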

Here is what we would like to see in the future.

Processing has to be offline and within the DAW. At the moment you take your MIDI to their own MIDI editor, render it there, and then import the audio back into the DAW; the rendering is quick, but it's done online.

You need to be able to control expression on the instrument. It did a surprisingly good job in our testing, but for quick passages it seemed to want to go too forte, slower passages came out pianissimo, and there was no way to get it to crescendo or diminuendo.

I think ideally this AI rendering will work with sample- or modelling-based virtual instruments rather than against them: there needs to be a good, playable element to get the ideas and the performance down. Some of the big virtual instrument libraries could well be looking to train AI on their existing sample content, for very exciting add-ons to existing libraries (that almost certainly won't be free!)
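As a small illustration of the expression control we're asking for, here is a sketch (Python, again using the mido library; entirely our own example, not an ACE Studio feature) that writes a crescendo as a CC11 expression ramp over two bars. This is exactly the kind of performance data we'd like a future AI renderer to read and honour rather than guess at.

# Sketch: write a two-bar crescendo as a CC11 (expression) ramp.
# Tempo, bar length, CC choice and value range are illustrative assumptions.
import mido

ticks_per_beat = 480
mid = mido.MidiFile(ticks_per_beat=ticks_per_beat)
track = mido.MidiTrack()
mid.tracks.append(track)

ramp_ticks = 8 * ticks_per_beat          # two 4/4 bars
steps = 32                               # number of CC points in the ramp
start, end = 40, 110                     # roughly pianissimo up to forte

prev_tick = 0
for i in range(steps + 1):
    tick = round(i * ramp_ticks / steps)
    value = round(start + (end - start) * i / steps)
    track.append(mido.Message("control_change", control=11,
                              value=value, time=tick - prev_tick))
    prev_tick = tick

mid.save("crescendo_cc11.mid")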