
Search Results

5 items found for ""

  • Audio Middleware: Why would I want it in my game?

In this day and age of game development, almost all AAA studios use audio middleware, and so do a good number of indie developers. However, many indie developers wonder what it is, where it comes from and why they would need it as part of their game.

What you will find in this blog:
- What & Why?
- How can #middleware improve my game?
  -> FMOD Space Racer - Demo Video
  -> Wwise Space Racer - Demo Video
- Multiplying & Diversifying - Footsteps
  -> Wwise Footsteps - Demo Video
- Making the Ambience More Realistic
  -> FMOD Ambience - Demo Video
- Complex and Long Dialog Script
  -> FMOD Dialog - Demo Video
- Other Important Factors
- And what about Adaptive Music?
  -> FMOD Adaptive Music - Demo Video
- Quick note on VR/AR/MR and 360 audio
- Conclusion

What & Why?

Some will say:
1. "Unity has a really good audio tool and I don't need more than that," or
2. "I can script all of that for you," or
3. "It's too expensive and I don't know exactly how much I'll pay to have it in my game," or
4. "It's very complicated and my programmers and designers will not want to learn it!"

Well, first, let's demystify some of that:
1. Game engines have good, simple audio tools that are fine for certain games, but they are no longer enough for the level of audio quality and implementation that most games require, whether Indie or #AAA.
2. Yes, programmers can certainly build a whole new middleware application specific to your game. However, your sound designer will have to learn how to use it and deal with its bugs, and there will be constant back and forth between them and the programmers, which takes time, something most of us cannot afford. The time spent is not worth it when your sound designer/composer can use a tool they are already comfortable with, one that gets the job done while handing the programmers only simple parameters and a few scripts that make the audio run exactly as planned.
3. Yes, you do pay for middleware if your game is very successful, but only after you turn a profit (I'll be breaking down the prices below). In short, if your game has to pay for middleware, it is because you made enough money to pay for it without blowing a hole in your budget.
4. As for being complicated, your programmers will not have to learn how to use it. The #SoundDesigner / #Composer's job is to make the implementation process through #Middleware easy for the programmer, who in many cases only has to write a few audio scripts to connect the middleware to the game engine; the rest has already been done inside the middleware.

So, through this article, I want to show you why you should consider audio middleware for your game and how it can improve the usability of audio and make transitions very smooth. To start, I'd like to point out the most used audio middleware on the market, and below you can see a simple comparison between them. All #Middleware have their pros and cons; however, for the purposes of this article, I am going to talk more specifically about #FMOD and #Wwise. #Fabric is not as widely used nowadays, and I personally have no experience with #Criware, which is mostly used by big game studios in Asia, such as #Capcom.
As for #Elias, I wrote two blog posts about the software, which I really enjoy. However, since I want to focus on both sound design and adaptive music here, #Elias would only cover one side; you can check my other blogs, "From Linear to Adaptive - How a Film Composer Fell in Love with Adaptive Music" and "From Linear to Adaptive - A Deeper Look into Elias Studio 3 MIDI Capabilities," to find out more about this amazing tool.

How can #middleware improve my game?

There are many ways that middleware can improve your game. The first is creating organic sounds, for example a vehicle engine. In the example below, I created a space racer engine in both FMOD and Wwise. In FMOD, you can see I used two different parameters, RPM and Load. RPM reflects the speed of the racer as it changes gears (I know, why would a space vehicle change gears? Because it sounds cool!), and Load tells whether the vehicle is accelerating or decelerating.

Disclaimer note: in the videos below, you will not hear me talking. These are simple sessions meant to demonstrate what you can accomplish easily and accurately in middleware. The sessions were created specifically for this blog, so there are not many complicated features in use, as they would make it harder to understand the point. In practice, middleware sessions tend to contain all of a game's audio elements together and use a lot of events, tracks and plugins to achieve their goal.

While in FMOD you build audio tracks with a still somewhat linear thought process (even though the result is non-linear), Wwise gives you a lot more freedom in choosing how an audio file will react to different switches and states.

Multiplying & Diversifying - Footsteps

Another way audio middleware can improve your game is by improving loops and sounds that need to be repeated, so they never sound exactly the same and your player never gets tired of them. Examples include footsteps, ambiences, and randomized vocal sounds and dialog. Below you can see an example from a Wwise session where I created footstep sounds on three different materials. There are only four footstep sounds per material, but because pitch, order and effects are randomized, it sounds as if thirty footsteps were recorded, while you save space in your data budget by storing only four sounds of each. This can also be accomplished with only two, or even one, sound per material.

Making the Ambience More Realistic

Ambiences can be tricky, mostly because players tend to spend a good amount of time in a single location. So, to avoid looping a single one-minute countryside morning and hearing the same birds over and over, we turn to middleware. With it, you can have multiple base ambiences of a couple of minutes each and layer in insects, birds and other nature sounds in random order. Doing that, you are able to create a much more realistic-sounding location. Moreover, you can use middleware to smoothly change the time of day and transition between locations with the correct fade curve. Below you can hear an example where I created a countryside ambience and made it go from day to night and from no rain to heavy rain with thunder. Notice that the birds in the morning do not repeat often and their tracks play randomly, making it harder for the player to identify a loop point. The thunder follows suit, never playing the same track the same way and always playing at random moments.
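To give an idea of how little the programmer has to touch for something like the space racer, here is a minimal sketch against the FMOD Studio C++ API (as found in FMOD Studio 2.x; error checking omitted). The bank names, event path and the RPM/Load parameter names mirror the demo above, but they are assumptions for illustration rather than the exact session contents:

```cpp
#include <fmod_studio.hpp>

int main()
{
    // Create and initialize the FMOD Studio system.
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(512, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

    // Load the banks the sound designer exported from FMOD Studio (paths are illustrative).
    FMOD::Studio::Bank* masterBank = nullptr;
    FMOD::Studio::Bank* stringsBank = nullptr;
    system->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &masterBank);
    system->loadBankFile("Master.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &stringsBank);

    // Look up the event the sound designer built and start an instance of it.
    FMOD::Studio::EventDescription* engineDesc = nullptr;
    system->getEvent("event:/Vehicles/SpaceRacerEngine", &engineDesc);  // hypothetical event path
    FMOD::Studio::EventInstance* engine = nullptr;
    engineDesc->createInstance(&engine);
    engine->start();

    // Each frame, the game only feeds the two parameters the designer exposed;
    // all layering, crossfading and pitch behaviour lives inside the FMOD project.
    float rpm  = 4500.0f;  // e.g. read from the vehicle simulation
    float load = 1.0f;     // 1 = accelerating, 0 = decelerating, as set up in the session
    engine->setParameterByName("RPM", rpm);
    engine->setParameterByName("Load", load);
    system->update();

    // Shut down when the game exits.
    engine->release();
    system->release();
    return 0;
}
```

In a real game these calls would of course live inside the engine's update loop rather than a single main(), but the point stands: the sound designer iterates on the RPM and Load behaviour inside FMOD Studio, and this code never has to change.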
Complex and Long Dialog Script

All middleware are better at implementing dialog than the game engines are. Consider that you might have twenty to fifty characters, each with over ten lines in twenty different languages, and you still need all of that audio to play correctly, with perfect timing and the same effects. With middleware you can organize it all and make sure every line is treated consistently. Another important use is when you have multiple takes, or several lines that serve the same purpose: you can group them and either call them randomly or in a chosen order. Check out the very simple dialog demo in FMOD below.

Other Important Factors

Let's say you have a character who goes inside a cave. The reverberation within that cave should completely change the character's dialog, the sound effects and the ambience; however, unless you want it to, the music should usually stay the same. In a game engine, you would probably need to program the reverb and delay to react on every single one of those tracks. In middleware, you only have to create a state in which all of the sounds you choose change seamlessly (a short code sketch of this idea follows at the end of this post). With middleware you can also change sounds on the fly, and they automatically update in the game, without any issues, in a matter of minutes. FMOD and Wwise can also connect to other software and external plugins, allowing audio artists to create exactly the sound that they, together with the game developers, envision.

And what about Adaptive Music?

I talked extensively about adaptive music in my blogs about Elias, which you can check out through these links if you have not read them yet:
- From Linear to Adaptive - How a Film Composer Fell in Love with Adaptive Music
- From Linear to Adaptive - A Deeper Look into Elias Studio 3 MIDI Capabilities

Music in films and TV series is cut to fit the scenes and to hit certain points organically. In a game, however, it is very hard to transition a track into the perfect hit point without making it fully adaptive, and to do that, you MUST use audio middleware. In the demo below I created two moments: Scene 1 and Battle. Scene 1 has no extra layers, but one of its tracks randomly chooses synths that complement its top line. It also has an intro that plays only once and leads into the Scene cue, which loops until a change is called. The Battle cue has both an intro and an outro, as well as four layers. To control those layers, I created a parameter called Battle Level, ranging from 1 to 4. Lastly, the MusicState parameter tracks where the player is: if the player is in Scene 1, MusicState is 1; if the player is in combat, MusicState is 2, triggering the battle sequence. The battle levels can be driven in multiple ways, such as how much danger the player is in, or how close the battle is to its end.

Quick note on VR/AR/MR and 360 audio

When it comes to VR/AR/MR, middleware is on it as well. Plugins are constantly being developed to fulfill the spatial audio needs of games through middleware. The most commonly used at the time of writing is the Google VR plugin, which allows you to place sounds correctly within the field for VR.

Conclusion

Audio middleware is a tool that can improve your game immensely, whether it is a AAA title or a simple indie Tetris remake. Beyond all the points I made above, middleware can also push your audio data budget down considerably.
Both FMOD and Wwise can compress on export to fit that budget, letting you choose the sound format, bit rate and sample rate. Moreover, through middleware you can add effects and randomization to all kinds of sounds without needing multiple recordings of the same sound, which in turn also brings the audio data budget down. So, dear game designer and developer friend, I hope this sheds some light on why your sound designer or composer is asking you to use one of these tools in your game. They see the value in it and understand the importance of a great soundtrack in making your game sound perfect. If you have any questions, I would be more than glad to answer them. I am well versed in both sound design and music and I know FMOD and Wwise thoroughly. You can contact me via email at contact@theonogueira.com
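As promised above, here is a minimal programmer-side sketch of the footstep, cave-state and battle-music ideas, written against the Wwise C++ sound engine API (the adaptive music demo in this post was actually built in FMOD, where the equivalent would be event parameters like the RPM example earlier). Initialization and bank loading are omitted, and every event, switch, state and RTPC name here is an illustrative assumption, not something taken from the actual demo sessions:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Assumes the Wwise sound engine has already been initialized and the
// relevant SoundBanks loaded elsewhere in the engine code.

void OnPlayerStep(AkGameObjectID player, bool onWood)
{
    // Tell Wwise which material the player is standing on; the random
    // containers the sound designer built pick the actual footstep variation.
    AK::SoundEngine::SetSwitch("Footstep_Material", onWood ? "Wood" : "Dirt", player);
    AK::SoundEngine::PostEvent("Play_Footstep", player);
}

void OnEnterCave()
{
    // One global state change re-routes dialog, SFX and ambience through the
    // cave reverb the designer set up, while the music stays untouched.
    AK::SoundEngine::SetState("Environment", "Cave");
}

void OnCombatUpdate(AkGameObjectID player, float battleLevel, bool inCombat)
{
    // Drive the adaptive music: the state picks Scene vs Battle, and the
    // Battle_Level RTPC (1-4) fades the extra layers in and out.
    AK::SoundEngine::SetState("MusicState", inCombat ? "Battle" : "Scene1");
    AK::SoundEngine::SetRTPCValue("Battle_Level", battleLevel, player);
}
```

As with the FMOD sketch, the game code only reports what is happening; which variations play, how they randomize and how the layers crossfade is decided entirely inside the middleware project.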

  • Sennheiser Ambeo Smart Review

SCORE: 90/100

In preparation for the VR Oslo event, happening on December 6th, 2018, Sennheiser let me test their new Ambeo Smart Headset: a 3D (360°) audio recording and playback headset that impressed me on many levels. Below you can check out the full video of my tests and comparisons.

Another test I conducted was posting the videos on Facebook, where the audio did not play back as binaural but as mono. So my last test was a live video. As I went to a Christmas lighting event in Lillestrøm, I took the opportunity to make a live video with the Ambeo Smart as the microphone capturing the audio. The result: Facebook Live does allow binaural audio. Check the video below (use headphones).

My blog is not displaying the portrait video correctly, but the audio plays correctly. If you wish to see the video better, follow the link below: https://www.facebook.com/theonogueiram/videos/10217452309957548/

My Pros and Cons:

Pros:
- 360 audio playback and recording are of very good quality
- The active listening feature works even with music on (you will never have to take one earbud out to hear your surroundings again!)
- Very comfortable fit (although it takes a while to get used to putting it on)
- Customizable EQ parameters in the Apogee/Sennheiser software
- Great noise cancelling feature
- Records 360 audio in most video/audio apps

Cons:
- It currently works only on iOS devices, though I was told they are planning to expand
- It only has a Lightning connector plug

Other:
- During video calls or when watching videos in certain apps, such as WhatsApp or Facebook Messenger, the audio falls back to mono. However, this is a problem with those apps and not the headset. (It works wonderfully with Apple apps such as FaceTime.)

My score is 90 out of 100! This headset is definitely worth having, whether you want to work with 360 audio, listen to music while walking or running outside, or watch films and shows or play games that use 3D audio.

  • Gullfiskteorien: How I took my approach to scoring my first VR Film & the creator’s thoughts behind

When I was approached by Thomas Pape, the sound designer and creator of the new VR film "Gullfiskteorien" (The Goldfish Theory), I was intrigued and excited to score something different. Last year I worked on two VR escape rooms for the company Sandbox VR. Deadwood Mansion is a multiplayer zombie house where you and your friends have to fight your way to survival. It has become very popular, alongside Curse of Davy Jones by the same company, which has the same gameplay style but a pirate-story setting.

Although Thomas Pape's project was a film, it contained certain immersive elements, in the form of decisions made by the viewer, much like most role-playing games have. The story is based on an experiment by two scientists around a teleportation theory built on a goldfish. The viewer takes center stage as the goldfish swimming in its bowl, watching the story develop and serving as the most important part of the teleport machine. I believe the music, along with the sound design and where he positioned them in the virtual environment mix, became very important aspects of the story, as they unconsciously make the viewer look in the direction of the sound. So, I decided to interview the creator and sound designer of the film, as a way of showing more people how VR can be used to create a new genre of film that could be enjoyed not only on a VR headset with headphones, but possibly in new 360-degree movie theaters that could benefit from this kind of content.

Hi Thomas! It's great to have you, and thank you for taking the time! I was wondering, what was your first interaction with VR?

I was introduced to VR by a composer giving a workshop on music for games at the Norwegian Film School. I told him that I was very intrigued by games like "The Stanley Parable," where you have a strong non-linear narrative, and he showed me "Accounting". I was blown away by the humor and crazy worlds!

Was that interaction an inspiration to create "Gullfiskteorien"?

Sure, in a way I guess it was! I remember at first I was playing a lot of different games. And I was completely spellbound! Shooting zombies, exploring strange planets, drawing in thin air, but I wanted to travel to a place that wasn't CGI. I was hunting for movies, but had a hard time finding something interesting that wasn't just pretty 360 landscapes or roller coasters. I was wondering why there weren't more cinematic stories told in live action stereoscopic 360. So I started experimenting with filming a lot of 360 footage, and found it both exciting and challenging! Coming from the world of cinema, I found that I had to re-learn a lot of my skills to apply them to VR, simply because it is a completely different medium. I think this might be why so few people are doing live action VR: you might THINK it's the same as shooting movies. It's just a 360 degree movie, right? But it isn't. And I think a lot of people might have been discouraged by that. Sure, a lot of the tools and tricks of the trade are the same when shooting "flat" movies and 360, but it's like comparing stage acting to screen acting. When you watch a flat film, you watch the movie from the outside. When you watch a VR movie, you are IN the movie! And that makes all the difference. I think the inspiration from that first encounter with "Accounting" kept bringing me back to the idea of that small world where you are the unspeaking centerpiece of the story.

That's a very interesting point!
Now, can you tell me, how did you come by the idea of using a goldfish in a bowl as the camera for the viewers?

I don't remember if it was the fisheye lenses of the camera, or just the feeling I got when watching my own early material, that spawned the goldfish idea. But when I was searching for a plausible and comical main character who couldn't speak, couldn't move around, but could only look around and observe, the goldfish in a bowl was obvious to me. I am also a big fan of great thinkers like Einstein and Stephen Hawking, and the fact that they both liked to use the goldfish for different explanations made it obvious to me that if I was going to brew up a fake scientific theory about time and space travel, this was the way to go! The idea that the scientists in the story relate directly to the goldfish, called Bent, throughout their travels gave me the key to letting the audience be part of the story, even though they can't talk back. At different points Bent is addressed as a colleague, sometimes a pet, or an enemy, but he always plays a central role in the scene. I think that when telling stories in VR, it is very important that you have a reason to be in the world. It can be small, like in "Pearl," where you always sit in the passenger seat of the car that literally drives the story. Or big, like in "Accounting," where you are the epicenter of the story. The problem with live action VR is, of course, that the film is already shot, and you can't interact directly with the world you are in. But I wanted to give the audience just a bit of control, so they don't just observe. So, inspired by the story of the mice in "The Hitchhiker's Guide to the Galaxy" (who do experiments on the humans, though they let them think it's the other way around), I gave the goldfish the power to choose between timelines. So at certain points in the story everything freezes, and you get to choose how to proceed. This way you become somewhat of an interactive viewer.

I remember you telling me the story of how hard it was to shoot in 360 degrees when you have to be hidden and can't really see the actors' performances well. Can you tell us about other challenges you faced shooting the film?

Hehe, the crew and especially the cast have certainly been tested to their limits when we shot this movie! When I started recruiting people I told them that this was going to be an experiment, and that we would have to invent how we should work as we went along. Luckily everyone was completely on board, and really exceeded my expectations in numerous ways. I had to write the script with both choreography and timing in mind, since the scenes are shot as one-takes with no editing. Then I worked with the actors to move them around the 360 space in a way that gives the audience a cinematic viewing. For example, when I wanted a close-up, the actor had to move closer to the camera, and if I wanted a two-shot they had to stand relatively close to each other. If I wanted something close to a standard cinematic shot-reverse-shot, they had to be at an angle that forces the audience to turn their head to look at them individually. The set designers built an entire laboratory in the studio to very specific measurements, because the actors couldn't get too close to the camera, which causes stitching problems, and they couldn't get too far away, because then the limited resolution of the camera would make us lose contact with them.
It was a bit of trial and error, where we first had to act out the script with tape markings on the floor, and then, once we had the walls in place, move them back and forth a bit to get the dimensions right. Then I had two cinematographers light our laboratory set from the outside, without having the lights visible in the shot. We had to work without a ceiling to bounce in the light, and they rigged a few black cloth contraptions to create contrast and simulate light fixtures. The roof was then added later as CGI. The outdoor scenes were especially challenging for the cast, because I wanted to show some of the more rugged and beautiful parts of nature. So we filmed on a mountain top near Jotunheimen, Norway, where it turned out to be -34 degrees celsius (-29.2 degrees fahrenheit) on the day of shooting! The cast, crew and equipment all had to have little heating packs taped to different parts to keep them from freezing! And then we were in a completely different scenario when we traveled to Israel to shoot the 3000-year-old oasis. The temperature there was +35 degrees celsius, in the middle of the desert. Unfortunately we couldn't drive to the location, but had to hike with all our equipment for about an hour, and partly scale a crevasse with climbing gear, to get to the spot where I wanted to film. But it definitely paid off, and the 360 view is breathtaking!

Creating the audio for the Goldfish Theory has also been quite an experiment, since conventional methods weren't possible. I had to use a combination of hidden lav mics on the actors and an ambisonics microphone at the camera to capture the performance, and then, when editing, place the sound sources in a virtual 3D soundscape to simulate reality. I had already written a lot of audio cues into the manuscript and worked with the actors to use sound to draw the audience's attention, but when I started sound designing and mixing it, I realized that I had immense control over the audience's attention. I could literally predict (to a certain degree) how people would look around, based on audio exciters planted at timed intervals in the story. It felt like I was harnessing the power of an editor in conventional flat film.

Now that the film has premiered, I feel that because it is a VR film, it is very hard to promote it with shots from the film and teasers: if you change its format to be watched on a phone or a screen, it loses the value that it has. What are your thoughts on that?

I think the best way to promote it is to talk about it, because you simply have to experience it. The moment you convert it to a format for phone or make a flat teaser, you lose the magic. When you watch it in VR, the video is stereoscopic, so you see everything in 3D, and you feel like you are there. The same is true of the audio. It's converted from an ambisonics master to binaural sound that correlates to your head movements and fools your brain into believing that what you hear is real. And the moment both your visual and auditory senses tell you that the world you are experiencing is real, then it becomes real. People have been telling me that VR experiences shouldn't last for more than 5-10 minutes, because people get nauseous, or over-stimulated, or stressed out, or the medium simply can't handle stories longer than 10 minutes. When people watch the Goldfish Theory they stay in that world for 30 minutes, and still come out with a smile.

Yes! That's for sure!
I remember watching the final product for the first time; it was very immersive and never really made me nauseous or sick in any way. Now, what about the future? Do you intend to work on new VR films and dive deeper into this new industry?

Yes, indeed! I am going to keep exploring how to tell stories in VR. I think one of the greatest challenges I'm facing is that there are no conventional funding programs for VR movies, and so far it's been very challenging for people to create a viable income. It's also very expensive to work with the material and equipment, since most of it is still experimental. I'm excited that people all over the world are pushing the technological limits of what can be achieved in VR; it is the exciting new frontier of storytelling, and I want to participate in its development as much as I can.

Great! So, if anyone is interested in "screening" the film or adding it to their festival, how should they reach out to you?

Preferably they can get in touch with me through email at tpape@plainmomentum.com or call me on +45 30 26 66 03, and I'll be happy to talk about how we can make it happen.

Alright! Thank you very much, and once again, this film was a great experience for me and I'm sure it was for you as well. I hope to have more people checking it out and talking about it. VR films could become a new genre for creating original stories.

  • From Linear to Adaptive - A Deeper look into Elias Studio 3 MIDI Capabilities

Introduction

After writing my previous article about my transition to adaptive music and how I fell in love with Elias Studio, I decided to go a bit deeper. I noticed there aren't many videos and tutorials online for Elias, especially for the MIDI features, which are brand new. I believe Elias is a game-changer when it comes to composing for games. It is being adopted by a growing number of game developers on a growing number of platforms. EA's marquee title A Way Out, for Xbox, PS4 and PC, and a host of other games for PC and mobile (including an announcement by Rovio that it will use Elias in upcoming titles) make me feel like this technology has reached maturity even as it continues to evolve and add new features.

Recap of Previous Article

Here is the song I wrote for that blog, using only Elias Studio and the samples available for it. You can hold back those cool and special melodies until the last minute, when the protagonist triggers the final climax of the scene, and then come back down to the regular ostinato pattern. It is almost as if you are writing film music for a game, thinking about all those accents in the scenes and how to hit them.

MIDI Walkthrough Video

This is the meat of my current article, and I hope you can take the time to watch and enjoy it. In it, I walk you through how easy it is to set up MIDI in Elias 3 and how great it sounds, and I also talk about some of the other features included in Elias Studio, such as the mixer.

Sample Libraries Used

Elias has a growing collection of great sample libraries available for purchase, covering percussion, orchestra and even guitars. In my demo and video I used all of them except the Elias Essential Orchestra, which was created by Orchestral Tools; I hope to do a separate review of the OT libraries! I love the sound of both guitars, and the percussion in Percussion 1 is outstanding (especially the snares!).

Conclusion

My experience with Elias 3 has been fantastic. I find it easy to use and intuitive to set up MIDI "generators" and tracks. My example uses 100% MIDI, but I think it will be fun to combine MIDI and audio in future demos to show how the two work together seamlessly (and sample-accurately) in Elias. Besides the MIDI features, the Action Presets, Mixer and Transition Presets are all easy to set up once you know what you are doing. I will continue this series and keep peeling back the layers of this hidden gem of an audio tool! As I mentioned in my previous post, FMOD and Wwise are both fantastic middleware; combined with Elias, I feel like there is nothing I cannot accomplish creatively! The unique combination of MIDI and world-class sample libraries from Indiginus and Orchestral Tools makes this a solid choice for adaptive music. Lastly, I really welcome questions, so ask away. And if I cannot answer them, then maybe we can ask Elias together and see what they say. I'm pretty sure they're open to comments, suggestions and ideas for improvement.

  • From Linear to Adaptive - How a Film Composer Fell in Love with Adaptive Music

Introduction

At the Game Developers Conference (GDC) in San Francisco, I was introduced to the adaptive music tool for game composers, Elias. They showed me the newly introduced MIDI capabilities, and I began to think about how it compared to other adaptive music systems I had learned, such as FMOD and Wwise. I was surprised to see a piece of software made 100% for composers and had to check it out. I'm a classically trained film composer who recently entered the game industry. The biggest struggle for a composer, whether he or she writes for film or games, is to figure out moments and moods and how best to portray them in music. Game composers have had to improvise and find ways of preventing their tracks from sounding boring and repetitive. Middleware such as FMOD and Wwise were a great start down this path, but Elias seems to me to be the evolution of adaptive game music. In this first article in a series on adaptive music, I will explore Elias and my first impressions of it.

A Little Background

It was not until earlier this year that I was hired to score a video game: the zombie hyper-reality escape room "Deadwood Mansion" by Glostation. I had a blast composing its three cues of about one minute each, with three layers and eight stingers. It was an eerie and ominous soundtrack with pads and sound design. I then started thinking about how different composing for film and games actually is. Music has been my life since I was seven: a Brazilian boy who loved music and sang, studied to become an orchestra conductor and ended up getting his Bachelor's degree in Music and Sound for Visual Media in San Francisco, California. It was by meeting students from multiple departments at the university that I had opportunities to work on projects such as "Scaredy Bat" by Greg Perkins, to score Cannes and Tribeca short films like "Curpigeon" by Dmitry Milkin, and to score my first over-twenty-minute, visually stunning short film, "The Colors of Hope and Wonder" by Juan Diego Escobar Alzate. I also worked at Strawberry Hill Music, a studio that gave me a lot of experience in scoring for linear media, including orchestrating and working with composer Raj Ramayya on multiple projects, such as the Canadian feature "Chokeslam," and doing voice-over design for the mobile tower defense game "Realm Defense."

Film vs. Game Scoring Struggles

Film composers struggle with a few things. The first struggle is finding the right instrumentation and genre to fit the mood and characters of the movie. You know, if John Williams had scored the Imperial March with a glockenspiel and a flute, Darth Vader would probably not be so scary. The second struggle is how to portray the director's ideas, which are presented both directly and indirectly. While some directors are really good at telling the composer what they want to hear in a specific scene or theme, others speak in abstract ways, like "Can you make this scene a little more blue and with a taste of sugar?" The third struggle is to place the cues on important moments, grow, reach an accent and come back down again. This is probably the hardest of them all, because the composer must read between the lines and get a sense of the full arc; it is like telling the same story through music and adding it to the visuals. And lastly, the final struggle comes in the form of self-criticism: we never think the music is good enough.
Of course, every artist struggles with this one; it is part of who we are, and without it we would never be pushed to create better and better art. Being first and foremost a film composer, I know those struggles all too well. However, when I composed the music for the video game "Deadwood Mansion," I found that the third struggle is a little different. While the film composer has to look at a scene and think of an arc with a beginning, middle and end, the game composer has to think about ALL possibilities at once. When you think of Jack Wall's score for Mass Effect 2, you realize he had to find moods for scenes such as romance, action and excitement, but he also had to think about layers and how to make the music increase and decrease in intensity. Cutscenes are more like films, but an in-game cue requires the composer to think about how the score will react to whatever the player might do and whatever the character might be thinking. Whether it was by using pulsing synths as a solid first layer, adding pads as a second layer and solo instruments as the third, Jack still had to think about the implementation: when those layers would come in and how they would fade in and out. This is a struggle I am sure all game composers go through, and it is possibly the hardest of all four.

The Perfect Loop

Another important part of being a game composer is creating the perfect loop, which is not always the easiest task. Sometimes a loop tail simply does not fit the loop head, even though you might be using the same instrument, the same time signature and the same notes; sometimes you simply feel like writing a cue in 17/8. What I discovered is that synthesizers, mostly pads and drones, can be your best friends and worst enemies at this stage. When scoring the zombie game, I had a few pads I created with long tails, and other pulsing synths that would keep going after the loop was over. These were my worst enemies, because the moment I cut the loop, they would not blend with the beginning of that segment. I also realized how much of the issue came from having more than one track playing together: if some of those loops could cut before the end of the segment and others after, they would blend perfectly. But as I was using FMOD, I had to blend instruments into fewer layers and create perfect loops of exactly the same length, or they would not fit. Stingers were my saviors, as always. I ended up figuring out how to cut some synth tails and match their sounds to the beginnings of the loops. Elizabeth Hannan explains this in a simple way by comparing a loop to a color wheel in her article "Creating Seamless Loops," using colors as an analogy for the position of the sound waves and showing that, to create a perfect and seamless loop, "the end of the segment needs to perfectly match up with the beginning of the segment." But there was still one thing that bothered me. If I could have separate tracks for two of my synths, and they could cut at different moments, the loop would blend perfectly without my having to work so hard at fitting sound waves together. Somehow, though, there was no way of implementing that in a game that I could think of (maybe with some champion scripting skills); not until I found out about Elias.

A New Way of Thinking "Adaptive" with Elias Studio 3

Imagine being on an island alone. You're given a small hut, a coconut tree and a small stone blade.
For days, all you can do is eat coconuts and drink their water. Then, all of a sudden, a book falls from the sky, telling you how to build a house out of clay, how to start a fire with rocks and how to build a fishing spear and hunting traps. It might not be the life you're used to, with a house, food at the grocery store and a gas fireplace that lights with the touch of a button, but with that book you can improve your life and probably survive longer. Then, a few days later, a second book falls from the sky, telling you how to improve your house, grow your own vegetables and build fishnets that will catch more fish than a simple spear. If these books keep falling, maybe you'll get your perfect life back.

I see the beginning of that analogy as how composers started developing music for games through the 80s: with MIDI instruments, implementing music directly into the game with few helper tools, having to improvise constantly and not always succeeding. They certainly did succeed with game soundtracks in the 90s, such as "Zelda: Ocarina of Time," "Pokemon Red and Blue" and "Castlevania: Symphony of the Night." However, when middleware such as FMOD and Wwise came along, it was as if our first book had fallen from the sky, giving game composers tools that made the workflow easier and the process less traumatic. Now the second book has fallen from the sky, and it is called Elias Studio 3.

The interface of its new version, Elias Studio 3.

Looking through reviews and tutorials of its predecessors was enough to get me interested. I decided to give it a shot about a month ago, and I can say for sure that I am in love with it. Not only does it allow you to create more layers, on as many tracks as you want (woodwinds, brass, piano, ukulele, etc.), it also lets you implement them by choosing the right stingers, adding multiple versions of a layer to create more variety, adding parts of a loop that can be reused and, even more important, it brought MIDI BACK! The agility settings are a way of choosing when a layer/level change takes effect on a track: per bar, or even on customized beats. So, if you want your piano to start only on Bar 3, Beat 2, then when you change the parameter to a new layer/level, the piano will start only on Bar 3, Beat 2. As for reusing parts of a loop, I think it is very useful for saving space. If you have a ten-bar bass drum loop but it repeats every two bars, you can simply bounce the two bars and loop them inside Elias. The same trick can be used on other tracks and levels as well, which saves a lot of the space audio files can take.

Now, as for the MIDI part, I know most composers would think, "Oh, unless I want that 8-bit sound or a Vangelis-like synthesizer, I won't need MIDI in my track, thank you." But imagine being able to add thirty tracks and ten layers/levels, have high-quality samples in them, and still keep data usage low! With audio, even using compressed files, if you add too many layers and tracks the cue can turn out quite big; with MIDI, the files are very small, and you can rely on Elias' sample libraries, which, after composing thirty cues for a game, might turn a 3 GB audio soundtrack into less than 10 MB of MIDI! Of course you have to count the space the sample libraries take, but it will still be less than a complete audio soundtrack.

From my DAW to Elias, and how I saw the magic happen.
When I decided to experiment with Elias, I wrote a 1:51 cue called "Journey Through the Edge of the World" and then chopped it into layers/levels and tracks to import into Elias. It took me a little while, and a few tutorials, to understand how the software worked, but I got the hang of it, and below you can check out how the cue sounded before in my DAW (I'm using Logic Pro X) and how it sounded in Elias, moving parameters and adding stingers. See the difference? I know you might be thinking you can do all of that with FMOD or Wwise, but you cannot: not with this many tracks, not with the agility settings, and not without going into the scripting board. Elias was not made to substitute your current middleware, but to assist you in getting your tracks into it in a much better and more organic way, without having to repeat the same thing a hundred times and bore the player.

The MIDI Setup

I did have some issues setting up MIDI to work with Elias, but after checking how they did it in their demo session, I figured it out, and I thought it would be a good idea to share it with you. Below you can see my process for getting MIDI to play in Elias, as I know other composers might have a hard time figuring it out like I did. Please note: you can create MIDI in whatever DAW you use; however, if you want to preview how it will sound using Elias' sample libraries, you will need an .sfz instrument player. The best choice is to download the Elias Sampler for free when you get the package and add it to your DAW. Then you can compose using their instruments, and when you import the .mid files into Elias, they will play as they did in your DAW.

First, select "Generators" in the bottom menu. From there, you can add new generators or choose the ones already in your session. In this case I used the Nylon Guitar (Renaxxance). In the bottom-left menu, you can find the patches to add. You can add one patch, or all patches for a specific library. You can select a specific MIDI channel for each patch, or simply select Omni and mix them all into one sound; if you do that, I would advise mixing the volumes properly. Then go back to your "Loop Tracks" section and select the track to add your MIDI generator to. In the bottom-left menu again, you will find the details, such as agility, fades, levels, progression and, most important, the generator and the MIDI channel you selected.

Because the product is still in beta, Elias is constantly updating and fixing bugs, but if you want to get ahead of the game and learn the software before everyone else, I would strongly encourage you to. They currently have a 90-day trial for whoever signs up for their subscription package.

Conclusion

To sum up, I'm a classically trained film composer who recently entered the game industry and, like every composer, had and still have my struggles. The biggest struggle for a composer, whether writing for film or games, is to figure out moments and moods and how to portray them. Game composers have had to improvise and find ways to keep their tracks from sounding boring and repetitive, and middleware such as FMOD and Wwise were like gifts from the programming gods, making their workflow easier and more adaptive. However, after trying Elias for about a month and getting to know the direction they are heading in, I encourage ALL game composers to try it out. Elias is not really a competitor to other middleware solutions, but more of a middleware companion.
Its features are unique and open an even bigger range of possibilities for us to make our tracks more diverse and interesting. I would not be surprised if film composers tried Elias and liked it so much that they figured out ways of using it for films and linear media as well. Who knows? There are so many variables…
