Tuesday 16 June 2015

Marketing and Promotion

Marketing a product is an important thing to do before its release. There are good ways of marketing a product and these techniques are:

  • Using social networking sites such as Facebook, Twitter and Instagram.
  • Using SoundCloud and YouTube to upload previews of the songs that will be on the E.P or album/mixtape.
  • Launches that will help promote the music product before and even after its release.
  • Sending the music to radio stations and getting airplay.

These techniques are useful and crucial ways of making sure people know about your music product and will help showcase your talent. Without promotion your product will not get much recognition, and not many people will know that you've released a great product to listen to. Marketing your product is the only way of exploiting your music to the public and yourself as an artist. So this is why it is important to promote yourself as much as you can to ensure people know who you are and to show them how talented you are.

In order to make my music product successful I will need to consider the styles of music, the target audience, promotion and the release date. The styles of music on my E.P are RnB, ambient and hip hop. My target audience will be people aged 18-24, and I will promote my E.P by sharing previews of my songs on SoundCloud and YouTube and sharing my E.P's artwork on Facebook, Twitter and Instagram. I will also need to think about the release date. The release date is a key thing for my E.P because I need to allow myself enough time to produce, record, mix and master the songs, get in touch with an artwork designer to make the artwork for my E.P, and possibly make posters to hang up. This product will be a free digital download and no physical copies will be given.


I uploaded one of my E.P's songs in April and so far it has received 64 plays, 1 favourite and 1 repost. In order to gain more plays and favourites I would need to post this song on Facebook, Twitter and YouTube and post a picture of the song on Instagram, because these social media sites are used frequently on a daily basis and this is a good place for me to start promoting my E.P. By doing this people will know I have a music product in progress and they will be interested in downloading the fully finished product. Adding hashtags, mostly on Twitter and Instagram, is another good way of promoting a song from my E.P because hashtags bring more publicity, so more people will know about me as an artist and what my goals are.


Monday 15 June 2015

Event Planning

Jay and I organised an event to take place at Westminster Kingsway College, and this event was for raising money for the 'Amy Winehouse Foundation'. Before the event took place we needed to book the venue, set a date for the event, and sort out an equipment list, refreshments and posters. Without this list of things the event wouldn't have happened, so it was important for us to make sure we had everything we needed. We decided the event should take place in college on the 3rd June from 12pm-2pm. So Jay and I spoke to Wendy, who is in charge of booking the college for events, and told her how important it was for us to book the college for this assignment. The next thing we did was create and print out 40 posters and hang them up around Kings Cross and inside the college building so that people knew about the event and when to come. This was a good way of advertising our event.

The next thing we had to do was give a tech spec to Grant. Before we gave the tech spec to Grant we had to make sure everything we needed was on it, otherwise things would have gone wrong. So we made sure all the equipment was on the tech spec before sending it to Grant. However, we were supposed to send the tech spec to him a week in advance and we sent it much later. So this was a mistake on our part, but we managed to get the equipment on the day of the event.

On the day of the event we bought 11 bottles of cola and 200 plastic cups, and one of the cups was going to be used for donations. We used 1 cup for donations and 50+ cups to serve cola to people. As we were doing this Jay and I spoke to some people, offering them a free cup of cola, asking them to make a small donation for charity and explaining why we were hosting this event. On the day of the event it was Jay's responsibility to bring all the equipment that was needed for setting up the decks, and it was my responsibility to bring my memory stick and help Jay with setting up the decks. If I didn't have my memory stick with me I wouldn't have been able to play music, and if I didn't help Jay with setting up it would've taken him longer to set everything up.

After setting all the equipment up and applying power to it, Jay adjusted the levels for the decks. The way we did this was by me playing a song and adjusting levels on the decks while Jay adjusted levels on the mixer. After we had set some levels for the decks I played a one-and-a-half-hour set; however, the set had to be cut off at the very end because it was causing distractions for people taking exams. So I cut the music off, helped Jay cut the power and de-rig all the equipment, and we put it back in the trolley and took it back to the music floor.

Next time I host an event in a group or pair I will need to make sure I know what I'm doing, and I will need to allow myself more than enough time to get everything in order, because if I don't do these things then my event will not go according to plan and will be very likely not to happen at all. When organising events as an events management employee there is no room for error and it's important that everything runs like clockwork.

The images below show what Jay and I did on the day of the event.










Friday 12 June 2015

Budget

Studio Time - If I were working on a new project I would produce the music for it at home rather than in the studio, because all I need to produce the music is a PC/Mac, a pair of headphones and a DAW to work in. This will help me save time and money, and I will be fully concentrated on composing new ideas and extending those ideas into full-length songs. However, if I were to work in a studio I would work at 'Bowerman Studios' for £35 per hour or £250 for an 8-hour session. This studio has great hardware and software equipment that will be very handy to use in my project in terms of producing and recording. I can also use this studio time to mix down my songs as well as recording and producing them.


Mastering - I would send my finished mixes to a professional mastering engineer because I know, as a producer, that I shouldn't master my own music. If I want a great sound for each of my songs the best thing to do would be to send them to a professional mastering engineer, because they will have the best hardware and software equipment, and the ears, to make my pre-masters sound professional just like the commercial songs I listen to. I would send my pre-masters to www.ebonyredaudiomastering.com because they charge £25 for 1 track to be mastered professionally, and I have listened to examples of their work and I really like what I hear. They have also mastered music for BBC Radio 1Xtra, which means they are professional, reliable and dedicated engineers who take their clients seriously.

Artwork - For artwork I would pay £10 to an artwork designer, and I will tell him what I want it to look like and tell him to write whatever I want him to write. For example, I will tell him to write my artist name and the title of the E.P. £10 for a 1-sided digital piece of artwork is a fair price because it isn't too high or too low. It's important for me to find an artwork designer who can create top-quality artwork for artists and producers and charge very little for it. Also, artwork is what makes a music product stand out and captures people's attention, so it's important for me to make my music product stand out as much as possible.

Online Promotion - I would promote my music product myself, and I would do this by sharing the artwork of my music product on my personal Facebook account and official Facebook fan page. I would also share the artwork on Twitter and Instagram as well as Facebook, because these two social networking sites are used by a lot of people and this is a beneficial way of letting people know I'm releasing a music product; it will convince a lot of people to have a listen to it. Another way I can promote my E.P is by uploading songs taken from my E.P to SoundCloud without making them downloadable, and writing the details of each song in the description; this way the public can have a preview of what's to come. I can also upload a video to YouTube with 30-second previews of each song that will appear on my E.P, and by doing this I am promoting myself more and spreading the word to people about my E.P.

Radio Plugger - Instead of hiring a radio plugger I will send some of my E.P tracks to radio stations using http://www.bbc.co.uk/music/introducing/uploader because it's free to sign up, and this will be a good way of sending music to radio stations and getting radio play for the music that will be on my E.P. It will also help me get more recognition as a producer and artist, and it will be a good way of showcasing my talent to people. This website is a very useful way to send music to radio stations because I can select which radio stations to send my music to, and I can send music to 1Xtra as this radio station is listed and 1Xtra is where most RnB and hip hop tracks are played. I can also earn royalties for radio play as long as I have signed up with PRS.

I could make my E.P a free download if I wanted to, but if I wanted to make a profit out of it I would sell it for £3.00, and if I managed to sell 500 copies of my E.P I would earn £270.00 profit. Making a profit out of my E.P shows how much people appreciate the hard work and effort I have put in to make my E.P project happen, and it also shows how committed and dedicated I am as an artist and producer. I will have spent £1,230 to produce, record, mix and master my E.P and get someone to do the artwork, and this is how I would have made £270 profit.
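The profit figure above can be sanity-checked with a quick calculation. This is just a sketch using the prices from this budget; the 500-copy sales figure is a target, not a guarantee:

```python
# Sanity check of the E.P budget figures in this post.
price_per_copy = 3.00   # selling price in GBP
copies_sold = 500       # hypothetical sales target
costs = 1230.00         # producing, recording, mixing, mastering + artwork

revenue = price_per_copy * copies_sold  # 500 copies at £3.00 each
profit = revenue - costs                # what's left after all costs
print(f"Revenue £{revenue:.2f}, profit £{profit:.2f}")
```

Selling fewer than 410 copies at this price would mean making a loss, so the break-even point is worth keeping in mind when setting the price.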

Thursday 11 June 2015

Professional Roles

1) The roles I have filled in my E.P project are composer, producer, and mixing and mastering engineer. I have composed, produced and mixed my E.P, and I have mastered other people's music, like Kipras' for example. All of these roles were key in this project: the composer and producer are the roles that compose the ideas for each track and structure the ideas into a full song. The mixing engineer role was vital in this project because the mixing engineer ensures that everything is balanced and ready for the mastering phase. Finally, the mastering engineer role was very important because the mastering engineer adds the final touches to the mixed song and makes sure to achieve the best out of the track before it gets released on iTunes, Beatport, 7digital, etc.

In this project I have worked with people from 1st and 2nd year music, and their roles were singer-songwriter and performer. Their roles were important in this project, as well as the other roles listed above, because they were going to write the lyrics, practise their lyrics with the instrumentals I gave them and perform them properly in the studio sessions I had with them. In these sessions I communicated with them, telling them that I was going to record them in comping mode, and I would tell them to lay backing vocals, ad libs or harmonies in whichever section of the song. I would also let them know what was good about the recordings, and I would tell them to re-record a section if it was off key or not in time with the instrumental. This is how I managed to achieve the best out of the people I was working with and make the instrumental(s) sound like a proper professional song.

The roles I have filled in group recordings are technician and producer. The technician role was a very important one for me because, as the technician, it was my job to rig all the mics up to the equipment that was being used in those sessions. It was also important for me to make a note of which channel each mic was going into so that we wouldn't get confused with the cabling, so we could avoid mistakes and ensure the session(s) ran smoothly. As the technician it was also important for me to de-rig and put all the equipment back where I took it from, as this is for health and safety. As the producer it was important for me to communicate with the band I was recording and tell them when I was about to start recording and when they must be silent before and after recording, so that I could get a recording of them with no background noises and end the recording with an audio tail. It was also important for me to make sure that every Logic file was saved on the audio drive and labelled appropriately, with the audio files copied to the project. If I didn't do any of this, everything would not have gone according to plan.

2) I filled roles such as DJ and promoter for my events management. These roles were very important because if I wasn't at my event to play a one-and-a-half-hour set or to let people know about the event taking place, the event wouldn't have gone to plan and it would have been a waste of other people's time, because they would've come to enjoy the music I would be playing and they would've donated some money to charity as well. Also, if this event wasn't promoted enough there wouldn't have been many people at the event, and it was important to invite as many people as were able to come along. I worked on this event with Jay, and the way we worked was by listing what the event needed in terms of equipment, posters, fliers, venue, date, time, etc. We thought about these requirements carefully because there was no room for error, and we did more than double-check the list of things we needed for our event.

I have also filled roles as a live sound engineer and technician in gigs back in December, at The Harrison Pub, the 1st year festival and the end of year show. These roles were vital in these shows because it was important for me to be on time for bringing equipment, rigging everything up, sound check and making sure there wasn't a problem with health and safety. As a live sound engineer it was important for me to make sure the levels on the mixing desk stayed in the green and that the equipment used in the performances was balanced. If I noticed anything going above green it had to be turned down, otherwise it would cause feedback and make the instruments unbalanced. For the 1st year festival and end of year show it took teamwork to bring all the instruments and equipment down from the music block and set it all up in the theatre.

3) I see myself in the industry as a music producer and DJ. To make these roles successful I will need to make contacts with artists and session players, talking about my projects and discussing collaborations. This will be a great way of making my music product look and sound professional, because the music won't just be produced in the box and the music won't just be instrumentals. I will also need some experience as a DJ and some examples of DJ sets in order to get shows in clubs. It's not easy finding shows, and the more experience I have the more likely I will be chosen to DJ for a show.

I will also have to contact people that do events management so they can help host and promote an E.P or album launch, as this will help me get recognised by other members of the public and possibly A&R representatives. Another thing I will need to do is join PRS and send music to radio stations, because once I have joined PRS I will earn royalties from my own music when it gets aired on radio, and getting my music aired on radio is a good way for people to recognise me and the other artists I have worked with; it will help my fanbase grow bigger.

Tuesday 9 June 2015

E.P Evaluation

What is your E.P called?

My E.P is called Voyage.

What styles of music are on the E.P?

The styles of music on my E.P are RnB, ambient and hip hop.

What genre would you categorise the E.P as? It can be more than one.

I would say my E.P's genres are more RnB and Ambient because some of the songs have pad chord progressions and some of them have a slow tempo.

Musically - How well does your E.P fit into the genre you work in? Compare your work to your favourite artists.

My E.P does fit into these genres, and I know this by comparing my music to RnB artists like PARTYNEXTDOOR & Jhene Aiko. When I listen to PARTYNEXTDOOR's music, the sounds used in some of his songs are very ambient and mellow, and when I listen to Jhene Aiko's music some of the sounds used are ambient as well, but her songs are not always ambient and mostly have a slow tempo. The instruments used to make the rhythm in her songs fit in well with the sounds used because the instruments are not harsh-sounding like hip hop or trap snares and hats; they have a subtle and smooth tone.

In terms of technical standard (recording/mixing) - are you happy with the E.P? Tell me about the strengths and weaknesses of your music.

I am happy with the recording of my E.P because I used top-quality mics such as the Rode NTK. The NTK brought clarity to the vocals I recorded, and I wanted the best vocal recordings for my E.P because in this project I thought about what I wanted this final product to sound like and I was focused on making sure the vocals had clarity and presence. However, I did use the SE Titan to record Imani B because I didn't know the golden rule of the NTK at the time.

Mixing wise, I would say the tracks can be improved more. I would improve them by listening to the drums and making sure their frequencies are well balanced; by this I mean not too much or too little low end on the kick, and making sure the high-end instruments are not harsh sounding. I would also make sure everything is well balanced in volume, making sure one instrument is not overpowering another. I would make decisions on what should be the loudest instruments or sounds in the mix, and I would decide which instruments and sounds need to be quieter.

Technically - How well does your E.P fit into the genre? Compare your work to your favourite artists.

I have compared 'Imani B - I'm In Love With You' to PARTYNEXTDOOR's song 'East Liberty'. I would say my song has a thinner sound than 'East Liberty'. When I listen to the kick, sub bass and snare of that song I hear how fat, strong and punchy those instruments are, whereas my song has a thin-sounding snare, no sub bass but a strong kick. So I would say this song, technically, doesn't fit as well into the genre. I have also compared 'Camilla x King Reginald - Don't Forget' to 'East Liberty', and I'd say this song fits in well with the genre because it has a fatter kick, layered snare drums, bass and sub bass. But I would say the mixing of the song can be improved, and I'd improve it by making the vocals and the clap less loud and making sure there isn't too much low end in the bass and sub bass.

Did you collaborate with other musicians/artists on your E.P?

Yes I did. I have collaborated with people in level 3 music yr 1 and yr 2. I have collaborated with Imani B, Camilla & Reggie.

Who did you work with in the studio? What roles did they fill? How did this help you complete your E.P? 

I worked with the same people in the list above and the role they filled was singer-songwriter. They helped me complete my E.P by writing lyrics for the instrumentals I gave them, rehearsing them and recording them as soon as they were ready; they also recorded ad libs, harmonies and backing vocals around the lead vocals.

Which role are you most comfortable in? Why?

The role I am most comfortable in is the producer role. This is because I am very focused on composing chord progressions, melodies, rhythms and basslines, I am very focused on structuring the ideas I've composed, and I don't have any difficulty with getting the ideas down into whichever DAW I am using.

What was the most important piece of equipment you used when making your E.P? How did you use it? Why was it most important? 

The most important piece of equipment I used was the Rode NTK mic. I used this mic to record all the vocals required for my E.P. This piece of equipment was very important to me because I wanted to bring the best out of the vocals. I didn't want to use a mic that doesn't bring out the best sound in the vocals, and the NTK was a brilliant mic to use on my E.P because I knew this mic is a valve condenser mic. For me, vocals are a key thing in a song that people like listening to, and this is why it was important for me to make sure the vocals had clarity and presence. So this is why the Rode NTK mic was such an important piece of equipment for my E.P.

Give me an example of a studio session that went really well. What was good about that session? Why?

I had two studio sessions with Imani B back in January and these two sessions went very well. They went very well because in the first session we managed to record all the vocals for the chorus sections. We recorded one chorus and I copied those recordings onto the other chorus sections so that we could save ourselves some time and not record them again. In the second session we managed to record all the vocals for the verses and bridge in one session and altogether, it only took two sessions to record the full song. Another reason why these sessions went well is because Imani didn't waste any time with getting his vocals recorded and he made sure to perform his vocals as if he were performing live.

What is your favourite track on your E.P? Why is it your favourite?

My favourite track from my E.P is 'Imani B - I'm In Love With You'. The reason it's my favourite track is that I really like Imani's harmonies and backing vocals. These sets of vocals add a warm and gentle sound around the music and the other vocals; plus, they are well layered with the lead vocals, and this makes the track itself not sound empty and alone. Another reason this track is my favourite is the track's mix. The drums are cutting through and so are the vocals, and this is exactly how I wanted the drums and vocals to sound in this track.

What are you going to do with your completed E.P? Send to labels? Post online for free? What are your plans for the future?

When my E.P is finished and ready for release I will upload it online and make it a free download. I will make my E.P a free download because this E.P will show the talent and experience I have, and this is a good way of getting heard more. My future plans are to promote myself more, make contacts with singer-songwriters and session players for future projects, get aired on radio and make contacts with events management for E.P or album launches.

What is the most important thing you have learnt when creating your E.P?

The most important thing I learned is parallel compression. Since learning parallel compression I have been able to make most of my drums and vocals cut through the mix and make them louder. Another thing I have learned is getting the balance of the vocals right by pulling the faders down on the mixing desk, listening to the vocals carefully and making sure they are not too loud or too distant in the mix. I have also learned to compress vocals hard without making them sound over-compressed, and to make them louder using the make-up gain.
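For anyone curious what parallel compression actually does, the idea is just blending a heavily compressed copy of a signal back in with the untouched original, so the quiet detail comes up while the peaks keep their punch. Here is a minimal Python sketch of the concept; the numbers are toy sample amplitudes standing in for audio, not a real plugin:

```python
# Toy illustration of parallel compression: mix a heavily
# compressed copy of a signal with the dry original.

def compress(sample, threshold=0.3, ratio=8.0):
    """Hard-knee compressor on a single sample's amplitude."""
    level = abs(sample)
    if level <= threshold:
        return sample
    squashed = threshold + (level - threshold) / ratio
    return squashed if sample >= 0 else -squashed

def parallel_compress(samples, wet=0.5):
    """Blend the dry signal with its compressed copy."""
    return [(1 - wet) * s + wet * compress(s) for s in samples]

dry = [0.05, 0.9, -0.6, 0.2, -0.95]
mixed = parallel_compress(dry)
# Loud peaks are tamed while quiet samples pass through untouched.
print([round(x, 3) for x in mixed])
```

The `wet` blend is the key control: at 0.0 you get the dry signal, at 1.0 you get ordinary compression, and values in between give the parallel effect.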

What is your most recent music purchase?

I haven't been purchasing music recently.

How much money do you spend on music every month?

I spend £1.98 on music every month.

How much money do you spend on going to gigs/clubs?

The gigs I have attended were a free entry and I don't go to clubs very much.

How do you plan to make a career in the music industry? What are your future plans?

To make a career in the music industry I will have to be very active in making contacts with music managers, because they will be the ones who get in contact with record label A&Rs and they will help me earn a place at a label. I also must make contacts with singer-songwriters and session players, because they will be useful people for future projects, and making contacts with events management is beneficial too because they will help me with hosting and promoting E.P or album launches, which will help develop my fanbase. Another thing I must do is join PRS so that I earn royalties every time my music gets aired on the radio.

Mastering

In this assignment I was given three of Kipras' tracks to master. When I received the pre-masters there was something wrong with all three of them: there was no headroom to work with and all three were heavily limited. I also noticed that one of the songs' vocals sounded too distant. I e-mailed Kipras to send me the same tracks with 6dB of headroom in each and nothing on the output master such as a limiter, compressor, EQ, distortion, etc., and I also told him to use a 320kbps reference track so that he could get the correct balance of Huseyin's vocals before sending it to me. I asked him to do these things because the limiter was messing up the dynamics of each song, and the dynamics are an important thing to work with, especially at the mastering stage. I told him to make the vocals louder in one of his mixes because the vocals are an important part of a song. People like listening to the vocals of a song and it's important to hear what the singer/rapper is saying; plus, the vocals are what bring a song to life.

Empathy Slow - Old School Love - I mastered this song at home using FL Studio. I used this DAW because most of my mastering plugins live in it and I can work with it with ease. I dragged the pre-master into FL and listened to it to make some decisions on what needed to be changed. I noticed the kick was too bassy and was the most overpowering thing in the mix, so I used an EQ to reduce the thump of the kick and some of the low end so that the kick was balanced and allowed the other instruments to cut through. I made sure there was still some low end because this track is an uptempo house track and this genre of music needs some low end so that people can dance to it. I used another parametric band to make the body of the track stronger because it wasn't strong enough and needed boosting. The body of the track is another important thing in electronic music because it gives the track life and energy, and this was a key thing for me to do.

After using EQ I used compression, and I used compression to glue the peaks and make the waveform balanced in volume. With the peaks compressed it would be easier to make the song louder, as everything would be balanced and level. I used a slow attack because I wanted to let the transients punch through; the transients are an important thing in a song, and if the transients were compressed the song's dynamics would have no breathing space and everything would be squashed too much. I used a fast release because I wanted the compressor to work fast. I didn't want the compressor working slowly, because a slow release will compress the song for too long, which can lead to over-compressing and will mess up the song's dynamics. So it was important to keep the release short. Finally I used a low ratio and a low threshold, and I set the parameters this way because I only wanted a small amount of compression, so that I could compress the peaks of the song and make sure they were levelled out; this way it would be easier to make the waveform look like a brick wall.

I then used stereo spread on the song because it didn't sound wide, and using stereo spread is a good way of making a track louder and wider. I wanted this song to have a wide sound because stereo spread adds more energy to a track, especially an electronic track. I used the stereo width parameter and listened very carefully as I pushed it, because I didn't want too much stereo spread. Too much stereo spread can lead to a track sounding out of phase, and I didn't want that sound for this song. I wanted to make this song sound as wide as possible without it sounding out of phase, and this is why I opened my ears and listened very carefully for what sounded wide enough and what sounded out of phase.
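One common way stereo-widening effects work under the hood is mid/side processing: the stereo signal is split into a mid (sum) part and a side (difference) part, and the side part is turned up to widen the image. This is a rough sketch of that idea, not the actual plugin I used:

```python
# Mid/side stereo widening: boost the side (L-R) signal relative
# to the mid (L+R) signal. width=1.0 leaves the signal unchanged;
# width>1.0 sounds wider but, pushed too far, starts to cause the
# out-of-phase sound you have to listen out for.

def widen(left, right, width=1.5):
    mid = (left + right) / 2.0     # what both speakers share
    side = (left - right) / 2.0    # what differs between them
    side *= width                  # widen (or narrow) the image
    return mid + side, mid - side  # convert back to left/right

l, r = widen(0.8, 0.4, width=1.5)
print(round(l, 2), round(r, 2))
```

A mono signal (left equal to right) has no side component at all, which is why widening can't rescue a track that was never stereo to begin with.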

After using stereo spread I used a plugin called 'Fruity Blood Overdrive'. This plugin amplifies the frequencies in the song and makes them louder, and it only distorts a song when too much pre-amp gain is used. I only used the pre-amp gain to make the song louder, and as I was doing this I listened very carefully to the song as I pushed it, so that if it started distorting I could reduce a bit of the pre-amp gain and make sure it was loud but not distorting. My target was to make the song hit zero constantly throughout, and the Blood Overdrive wasn't enough on its own, but I knew if I pushed it a bit more the song would distort and sound crushed.

The last thing I did was use a limiter to make sure the song was playing at one level and to compress the peaks a little bit more. I used the limiter by over-processing it: I turned the output down to -20dB and turned the gain up by 20dB. I then turned the release to zero so that I could hear the horrendous distortion, and from there I pushed the release up until I could no longer hear the song distorting. After finding the right amount of release I reset the gain and output to zero, then over-processed the gain again so that I could hear the song heavily limited and reduce the gain until the song sounded stable and not heavily limited.
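The gain-staging behind this over-processing trick is just decibel arithmetic: +20dB of input gain into a -20dB output cancels out, so the limiter gets driven hard while the overall playback level stays roughly where it was. A quick sketch of the maths (a simplified view of the limiter's gain and output controls):

```python
def db_to_linear(db):
    # Convert a decibel value to a linear amplitude multiplier.
    # +20dB is x10, -20dB is x0.1, 0dB leaves the level alone.
    return 10 ** (db / 20.0)

drive = db_to_linear(20.0)   # gain pushed up by 20dB (x10)
trim = db_to_linear(-20.0)   # output pulled down by 20dB (x0.1)
print(drive * trim)          # the two stages cancel out overall
```

This is why the distortion you hear while tuning the release is coming from the limiter working hard, not from the track actually being played back louder.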

Empathy Slow - You and Me - I listened to the next track and noticed it had more low end than the first track, and this overpowered everything else in the song. So I shelved out more low end than on the first song because I wanted to make the frequencies balanced, but I made sure I didn't shelve out too much low end because this song needed some, as it is an uptempo dance track. I then used 3 parametric bands, and these bands were used to reduce the thump of the kick, make Huseyin's vocals louder and make the body of the track stronger. I used the parametric bands this way so that I could make the song sound well balanced and give it more life. The last thing I used was a high shelf, and I used this band to make the song sound cleaner and brighter. It sounded dull and muffled without the high shelf, and I didn't want that sound for the final finish.

I used more compression than on the first track because there were more peaks in this track and they were a bit bigger. I set up the parameters the same way as in the first song, but I used a lower threshold so that I could squash the peaks as much as I could. However, if I heard the song being compressed too hard I would need to bring the threshold up a bit and make sure the song didn't sound compressed, that the dynamics had some breathing space and that the transients were not being squashed. I noticed the compressor gave me more headroom; I wanted the song to play at roughly -6dB, and I did this by using very little make-up gain to ensure the song was playing at that level.

After using EQ and compression I used the same plugins from the first song to add the final touches to the second song, and I repeated the same steps with the limiter, because over-processing the limiter is a good way of getting the right amount of limiting and making sure the limiter isn't working too hard; plus it's useful for glueing the peaks more.

Technusk - The third song was only a tiny bit bassy and subby, so I didn't have to shelve out much. I only had to shelve out a little bit of the low end to make sure the bass level wasn't drowning out the other instruments and sounds. I used two parametric bands to reduce a very small amount of the kick drum's thump and make the body of the song more lively, and finally I used a lo-pass filter to cut off some of the high end, because the high end sounded very harsh and piercing and this wasn't the sound I wanted for the finished master.

I then used very little compression because there weren't many peaks in this song. The only parameter I changed was the threshold: I only brought it down a little, because there weren't many peaks to squash and I didn't need to squash them much. Like on the first two songs, I made sure the dynamics had breathing space, the transients weren't being compressed and the compressor wasn't squashing everything hard. I then used a tiny bit of make up gain to make sure the song was playing at -6dB.

From there I used the same plugins from the first track to add the finishing touches before bouncing all three finished masters individually and burning them onto a CD.

Thursday 4 June 2015

Summer Gigs

Equipment List:

14x XLR Cables
Mini Jack - Jack (Laptop)
Audix D6
4x DI Boxes
9x vocal mics (2 wireless)
8x Jack - Jack Cables
Bass Amp
2x Guitar Amps
V Drum Amp
Headphones
Midi - USB Cable (Midi Keys)
Mic Stands
Risers
Guitar & Bass Stands
Floor Wedges
Mixing Desk



These pictures show the staging and connections of the equipment that will be used in the end of year show. Instruments like the bass, electric guitar and acoustic drums don't need D.I boxes because the sound they produce is loud enough for the whole audience to hear. However, the kick drum on the acoustic kit will need the Audix D6 because the kick isn't as loud as the cymbals or snare drum. The vocal mics will be plugged directly into the floor box, and the D.I boxes will take jack - jack inputs first and then be plugged into the floor box.

Wednesday 20 May 2015

E.P Production Diary: Mixing

Track 1 - I mixed the first track in the college's recording studio because the studio's monitors deliver accurate sound and frequency response and are more reliable than headphones. I bounced the first track into individual stems without processing because this way I can work on each stem individually, and stems are easier to work with when they haven't been processed already. Bouncing the stems with processing would make them tricky to work with, because the stems would already have been processed and I couldn't do any more processing to them. I put the track's stems into a folder labelled 'E.P Project (Stems)' so that I knew where to bounce the stems. I then created a new folder inside it labelled 'Track 1 (144bpm)' so that I knew to bounce all the stems to track 1's folder and what the tempo of the stems was.

From there I took the folder with track 1's stems on my memory stick, opened a new Logic file, set the tempo to 144bpm and imported all the stems. I then saved the project with 'copy all audio files to project' ticked, because if this option was not ticked, the next time I opened the project all the stems would have gone. I saved the Logic file into my folder on the audio drive and labelled it 'E.P Track 1 (Mixing Phase)', which helped me know where to go when I wanted to work on this project again. I pulled all the stems' faders down so that there was roughly 6dB of headroom on the output master, which gave me room to work on the instruments that were too loud or too quiet and to improve their sound.
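The headroom figures above are just dB-to-gain arithmetic. A small sketch of my own (not anything from Logic) shows what pulling faders down by 6dB actually does to the signal:

```python
import math

# Decibels to linear amplitude gain and back (standard audio formulas).
def db_to_gain(db):
    return 10 ** (db / 20.0)

def gain_to_db(gain):
    return 20.0 * math.log10(gain)

# Pulling a fader down 6 dB roughly halves the amplitude:
# db_to_gain(-6.0) is about 0.501, so the signal sits at half strength,
# leaving headroom on the output master for the mix to breathe.
```

So "6dB of headroom" means the loudest peaks are at about half the amplitude the output bus could take before clipping.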

Drums - I grouped all the drums to one bus and parallel compressed them using sends. I used parallel compression for the drums because it is really useful for making drums sound loud, fat and in your face, and that is what I wanted the drums to sound like in the first track. I didn't want the drums to sound buried in the mix; parallel compression lets them punch through and gives them more life. I used the 'FET' circuit to compress the drums because it lets the drums duck in the mix and is really useful for adding extra punch, mostly to the kick. I used a long attack because I wanted the transients of the drums to punch through and the drums to stand out in the mix. Without the transients, the dynamics of the drums would have disappeared and the drums would sound very over-compressed. I used a short release because I wanted the drums to duck fast. I didn't want a long release on the drums because a long release can lead to the drums sounding over-compressed and won't help them punch through strongly.

I then used a small ratio and a low threshold. I used a low threshold because I wanted to compress the drums really hard and make them sound more pokey; it also meant I could push the drums and make them louder using the fader. I used a small ratio because a big ratio would cause too much compression, which would give the drums no breathing space to punch through the mix and would make them sound buried and faint.

As I was using parallel compression, I used the sends on the kick, snare and hats to adjust how much compression each instrument should have. I used more send on the kick and snare because these are the two main instruments that give the song rhythm and character. I didn't want to use too much on the hats because too much send on the hats would overpower some of the instruments in the song; hats and hi end drum instruments in general sit laid back in the mix and are not the loudest or most dominant instruments.
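The send-based routing above can be sketched in Python. This is my own simplified picture of parallel compression, not Logic's internals: the send level per instrument decides how much of it feeds the compressed bus, which is then summed back with the dry drums.

```python
# Sketch of parallel compression blending (illustration only).
def parallel_mix(dry, compressed, send_level):
    """Sum the untouched signal with the heavily compressed copy,
    scaled by how hard the send feeds the compressed bus (0..1)."""
    return [d + send_level * c for d, c in zip(dry, compressed)]

# Hypothetical per-instrument send levels: kick and snare feed the
# compressed bus harder than the hats, so they gain the most punch.
sends = {"kick": 0.8, "snare": 0.8, "hats": 0.3}
```

Because the dry signal is always present at full level, the transients survive even when the compressed copy is squashed hard, which is the whole appeal of the technique.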

After using compression I used EQ to make the drums stand out even more. I used a lo shelf band to make the kick sound more bassy and stronger. I then compared how the kick sounded without the EQ, and the lo shelf improved the kick drum a lot because it sounded much more subby and punchy. I used a parametric band for the snare because the hit of the snare didn't punch through enough and sounded very faint. The parametric band helped the snare hit harder; the way I knew was by turning the band off, listening to what the snare sounded like before, then turning it back on and hearing the improvement. The last band I used was a hi shelf, and I used this band for the hats only, because the hats sounded very harsh and tinny and I didn't want that sound in the mix. The hi shelf reduced those tinny frequencies and made the hats sound the same but subtler and not so harsh.

I looked at the parallel compression channel's level to check for clipping; the channel maintained a steady level and did not clip. If it had been clipping I would have used a limiter to stop it, but in this case the parallel compressed drums didn't need one.

Before grouping the parallel compressed drums and the grouped drums together, I adjusted the faders on both channels so I could listen to how the drums sat in the mix. I didn't want too much gain on the aux because that would make the drums sound over-powering and I would only hear the parallel compressed drums rather than the grouped drums. I used my ears for this job and listened very carefully to what sounded right, balancing how much of the grouped drums and how much of the parallel compressed drums I should have. I relied on my ears more than my eyes because watching the meters would make me lose focus on what actually sounded good or bad.

Finally I grouped the parallel compressed drums and the grouped drums to one bus. This bus is the drums master bus: it lets me control all the processed drums with one fader, which helps with setting the right amount of gain for the mixdown.

Pads, Piano & Sweep FX - For the pads I used EQ to roll off some of the unwanted lo end. I rolled it off because it wasn't needed, and without doing so the kick wouldn't have cut through the mix. I then used a hi shelf to make the pads sound brighter and cleaner; the pads didn't have enough brightness and I wanted them to sound clean and bright, not faint or dull. I then raised the faders on the two pads because they sounded too quiet, and as the pads are the main sounds in the first track it was important to make sure they sat at a clearly audible level.

For the piano I used EQ to roll off some of the unwanted lo end, like I did for the pads, because the lo end was getting in the way of the kick punching through. I then used a hi shelf, again like the pads, to make the piano sound brighter, because it sounded very dull and faint. I didn't want the piano to sound like that in the song; I wanted it clean, bright and crisp, and the hi shelf helped it get that sound. I then used the fader to make the piano louder, but not too loud, because if it was too loud it would be the most overpowering thing in the mix. So I adjusted the fader on the piano until it sat in well with the pads and drums.

For the sweep fx I didn't use EQ, compression or limiting, because when I soloed it and listened, it already sounded perfect, so there was no need for any processing. The only thing I did was use the fader to make the fx a tiny bit louder than the piano; with too much gain on the fx it would drown out the other instruments, and I didn't want the sound effect to do that in the final mixdown.

Vocals - For mixing Imani's vocals I grouped the lead vocals to one bus, the ad libs to another, and repeated the same method for the backing vocals and harmonies. I grouped the vocals because it's easier to mix each set in one aux rather than having loads of plugins dotted around the individual channels. Another good thing about mixing vocals on one aux is that I have complete control over them, meaning I can decide how loud or quiet each set of vocals should be.

The first set of vocals I mixed were the lead vocals. I used EQ to roll off some of the lo end because it wasn't helping the kick punch through the mix and wasn't needed in the lead vocals; it was background sound from the recordings. I then used parametric bands to locate the nasally parts of the vocals and take them out, because I wanted to clean up the vocals and make them sit well in the mix. The nasally parts didn't make the vocals sound clean and didn't work well in the mix, so it was important to get rid of those sounds. I then copied the same EQ onto the other sets of vocals, because they would have the same sound as the lead vocals, so the EQ I used to clean up the lead vocals would clean them up too, and it saves time.

After EQ'ing the lead vocals I used compression to reduce some of the loud parts. I did this with a small ratio, a low threshold, a steady attack, a steady release and the vintage opto circuit. I set the compressor up this way because I wanted to compress the lead vocals very hard while still maintaining their dynamics, and because I wanted the compressor to hold the lead vocals at roughly one level. The steady attack let the transients of the lead vocals punch through the mix, and the steady release meant the vocals were compressed very hard without sounding over-compressed.

I used the vintage opto circuit because it adds a little pleasing distortion and lets me compress things such as vocals very hard without making them sound over-compressed. A circuit like the platinum would make the vocals sound over-compressed and doesn't deliver the same sound as the vintage opto. In my opinion the vintage opto is the best circuit for compressing vocals really hard, because even then they won't sound over-compressed.

Using a low threshold also helped with compressing the vocals very hard: whenever an instrument, sound or vocal goes over the threshold the compressor kicks in, so a very low threshold on the lead vocals caused really hard compression. The small ratio was handy as well, because the ratio controls how much the signal above the threshold is compressed, so it decided how hard the lead vocals were squashed. If the lead vocals were being compressed too hard I could instantly adjust the ratio until they were compressed less hard.

I then copied the same compressor onto the other sets of vocals, changing only the threshold and release. I did this because I didn't want the other sets of vocals to have the same amount of compression as the lead vocals; I wanted to give their dynamics some breathing space. If the other sets of vocals were compressed like the lead vocals, everything together with the other instruments and sounds would sound over-compressed and the dynamics of those vocals would have vanished.

I used another EQ after the compressor, this one for tone. I used a hi shelf band and two parametric bands. The hi shelf gave the lead vocals clarity and sizzle, which is the sound I wanted to hear from them; I bypassed the EQ to compare before and after, and the hi shelf improved not only the clarity but the tone as well. I used the two parametric bands to give the lead vocals body. They sounded like there was no body in them whatsoever, and by boosting some of the mid and lo mid frequencies with the two parametric bands I gave the lead vocals body and character. From there I copied the same EQ onto the other sets of vocals so that they sound reasonably similar to the lead vocals.

I then used a De-Esser on the lead vocals to reduce the amount of sibilance. There was too much sibilance in the lead vocals and it sounded very harsh to listen to. I detected it by opening a Multimeter and soloing the lead vocals, and I noticed the sibilance was around 15kHz - 16kHz. So I used the suppressor on the De-Esser to cut some of it off: I used the frequency control to position where the sibilance was coming from and the strength to reduce it. I then used smoothing to control how fast the De-Esser reacts, because I didn't want the words of the lead vocals to sound like they had disappeared, as that wouldn't be a clear sound for the mix. After De-Essing the lead vocals I bypassed the De-Esser to compare before and after, and it reduced the amount of sibilance while maintaining the clarity of the vocals.
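The idea behind the De-Esser's detector, listening only to the sibilant band before deciding how much to duck, can be sketched crudely in Python. This is my own stand-in, assuming a simple one-pole high-pass as the detector filter, which is far rougher than the real plugin:

```python
import math

# Crude sketch of a de-esser's detector: high-pass the signal so only the
# sibilant band remains, then measure its RMS level (illustration only).
def highpass_rms(samples, sample_rate, cutoff_hz):
    """One-pole high-pass filter followed by RMS of the result."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)   # passes fast changes, blocks slow
        out.append(y)
        prev_x, prev_y = x, y
    return math.sqrt(sum(v * v for v in out) / len(out))
```

A 15kHz tone survives a 10kHz high-pass almost untouched while a 200Hz tone barely registers, which is exactly how the detector can react to "s" sounds without touching the body of the voice.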

I then copied the De-Esser from the lead vocals onto the other sets of vocals, apart from the backing vocals, because the backing vocals have no sibilance, and copying and adjusting is much faster than setting up a new De-Esser from scratch. Also, the other sets of vocals are unlikely to have the same amount of sibilance as the lead vocals, and if I kept the De-Esser's parameters identical, the words being pronounced in the other sets of vocals wouldn't punch through the mix.

I then panned the ad libs, harmonies and backing vocals left and right because I wanted to widen them. I didn't want to keep them central like the lead vocals or they would clash with each other and make all the vocals sound muddy. Panning these vocals made the stereo image sound louder, wider and more balanced. I didn't pan all the vocals hard left and hard right; I wanted the right amount of panning for each set so that they all have their own space. I panned the vocals using the mono channels that were grouped to a stereo bus: the backing vocals hard left and right, the harmonies to -32 and +32, and the ad libs to -16 and +16. After panning, each set of vocals had its own space in the stereo field and they sounded much louder, wider and less muddy. The lead vocals were going to a stereo bus and their plugins were stereo, so I used a Direction Mixer to make the lead vocals mono.
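Those pan positions (-64 to +64 on Logic's knob) map to left/right gains through a pan law. As a sketch of my own, here is an equal-power law, which is an assumption: Logic actually offers several pan laws and I'm not claiming this is the exact one in use.

```python
import math

# Equal-power pan law sketch: position -64 (hard left) .. +64 (hard right).
def pan_gains(pos):
    """Return (left_gain, right_gain) for a pan knob position."""
    theta = (pos + 64) / 128 * (math.pi / 2)   # map position to 0..90 degrees
    return math.cos(theta), math.sin(theta)

# Centre (pos=0) gives both sides about 0.707 (-3 dB each), so the
# overall loudness stays roughly constant as a sound sweeps across.
```

This is why panning the harmonies to -32/+32 and ad libs to -16/+16 spreads them out without any set of vocals suddenly jumping in level.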

From there I used sends on the lead vocals only, and these sends were for reverb and delay. I used the Tape Delay plugin because its parameters are really useful and not hard to use. An example is the tempo sync function, which makes the delay react to the tempo of the song, and this comes in very handy. I used it because I wanted the delay on the lead vocals to be synced with the song's tempo; if the delay came in out of time it would have messed up the flow of the song and caused some clashing.

Another useful parameter on the Tape Delay is the lo cut and hi cut. This is like an EQ with only filter bands, and it's really useful for cutting frequencies out of the delay so it doesn't clash with other sounds and instruments. I used it to cut some of the lo end out of the delayed lead vocals, the same unwanted lo end I cut with the parametric EQ on the aux that had all the lead vocals grouped together. I knew this lo end would get in the way of the kick drum punching through, which is why I cut it off.

The last thing I used on the Tape Delay is the feedback, which controls the amount of delay. I used very little, because too much delay would drown out the words in the vocals and could overpower the song. To do this I used my ears and listened to the vocals very carefully, because I rely on my ears more than my eyes; my ears are more accurate at judging what sounds balanced and what the tone is like. I pushed the send up while the song was playing, and as soon as I could hear the delay in the song I stopped pushing the send.
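The tempo sync mentioned above is just arithmetic on the BPM. A small sketch of my own shows how a synced delay time falls out of the tempo, using track 1's 144bpm as the example:

```python
# Delay time for a tempo-synced echo (sketch; note lengths relative to
# a quarter note, assuming 4/4 so one beat = one quarter note).
def delay_ms(bpm, note=1/4):
    """Milliseconds for a given note length at a given tempo."""
    beat_ms = 60000.0 / bpm          # one beat in milliseconds
    return beat_ms * (note / (1/4))  # scale to the requested note length

# At 144 bpm a quarter-note delay is about 416.67 ms;
# at 120 bpm an eighth-note delay is exactly 250 ms.
```

Tempo sync in the plugin saves doing this by hand, but it is useful to know what the knob is computing.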

Before using the Space Designer I copied the two EQs from the lead vocals onto the reverb bus, placing the Space Designer after the EQ that takes out the nasally parts, so that the lead vocals with reverb sound exactly the same as the processed vocals on the buses. I used the Space Designer reverb because it has a lot of very useful presets, including big-sounding hall and large room presets. I soloed the lead vocals, pushed the sends to zero and went through some hall and large room presets until I found one that sat well with the vocals. I then listened to the lead vocals un-soloed and noticed there was too much reverb, so I turned the reverb send down until the reverb sat well with the instruments and sounds and everything became balanced.

The next thing I did to the vocals was parallel compress them. I used parallel compression because it is a very useful method of making the vocals louder and helping them cut through the mix, and I wanted the vocals to be loud. I selected all the vocals, put a send on them, went onto the aux channel and opened a compressor, but I didn't make its parameters the same as the compressors on the other auxiliaries. Before setting up the compressor I copied the EQs I used for the lead vocals onto the parallel compression auxiliary, so that when I use the sends the vocals sound exactly the same as on the other auxiliaries. I put the compressor after the EQ that takes out the nasally parts of the vocals, which helped the vocals sound consistent with the other auxiliaries.

Before setting the compressor's parameters I experimented with every circuit apart from the platinum and vintage opto: the platinum circuit would make the vocals sound over-compressed, so I knew it wouldn't be useful for parallel compression, and the vintage opto was already being used on the other auxiliary channels. I wanted to compress the vocals hard, but not as hard as on the auxiliary channels. I really liked the vintage VCA circuit because it responds fast, gives the transients character and, like the vintage opto, can compress the vocals very hard without making them sound over-compressed or squashing their transients.

I kept the ratio the same as the compressor on the lead vocals so that the vocals have the same amount of compression as the other processed auxiliaries; 3.0:1 is a useful ratio and is commonly used on vocals. I also kept the release the same, because a long release would compress the vocals for too long, which could lead to them sounding over-compressed and wouldn't help them cut through the mix. I used a longer attack because I wanted the transients of the vocals to punch through; whether it's a sound, a drum or a vocal, the transients must cut through the mix. I also changed the threshold, so that the vocals weren't being compressed as hard as on the auxiliaries and most of their dynamics could cut through. The last thing I did was add 6dB of make up gain, so that the quietest parts of the vocals are louder and don't sit too far back in the mix.

I set the send on the lead vocals first because the lead vocals must be the loudest thing in the mix. I pushed the sends while using my ears to listen for the lead vocals cutting through the mix, and as soon as I heard them punching through I stopped pushing. I then repeated the same strategy for the other sets of vocals, making sure the lead vocals stayed the loudest thing in the mix; for example, I didn't want the backing vocals to be louder than the lead vocals. I wanted the lead vocals loud but sitting well with the instruments and sounds, because lead vocals that are too loud leave very little space for the instruments and sounds to cut through.

The last thing I did was group the processed vocals and parallel compressed vocals to one aux, which is the vocal master aux. I grouped them all to one bus because I can then control all the processed vocals with one fader, which lets me set how loud all the vocals should be; it's easier to control the vocals with one fader than with lots of them.

Track 2 - I mixed the second track in the college's studio because I knew the recording studio is a reliable environment to mix in and its monitors deliver accurate frequency ranges for instruments, sounds and vocals. I repeated the same steps before mixing track 2: bounce the stems into a clearly named folder with the track's bpm, take them into college on a memory stick, import them into Logic, save the Logic file with the stems copied into the project so I didn't lose the audio files, name the Logic file appropriately so I knew it belonged to me, and save it to the audio drive, as this area saves projects along with their audio files.

Drums - I used the same method for mixing the drums on track 2 as on track 1, because parallel compression is a unique method for making the drums sound pokey, loud and punchy and helping them cut through the mix. I used the same number of buses as on track 1 and labelled all three appropriately so I knew what should go where: the first bus has no processing, the second has a compressor and EQ, and the third is the drums master bus. I set the compressor's parameters the same as on track 1, because when I listened to the parallel compressed drums on track 1 they sounded loud, fat, pokey and punchy, and I wanted the same sound for the drums on track 2. The drums are the most important thing in the song and it was important to make them as loud as possible. I then used an EQ on the pcomp bus so I could boost mostly the lo end and hi end of the drums; the EQ is what makes them fatter and crisper. I did use one parametric band, for making the hit of the snare drum stronger, because the snare's hit wasn't cutting through the mix enough and it was crucial that the snare cut through throughout the song.

After setting up the pcomp bus I selected all the drums and used the sends to feed them into the pcomp, listening very carefully so that if an instrument wasn't meant to be louder than another I could turn its send down. As I was pushing the sends I noticed the hats and the finger snaps were too loud, so I selected them and turned their sends down until they sat in well with the rest of the sounds and drums. However, I turned the send on the finger snaps down too much, and the finger snaps are one of the main instruments in the song. So I selected all the finger snaps and pushed the sends back up until they were roughly 1 - 2 dB behind the drums, because if the finger snaps were level with the kick drum they would be too loud and overpowering. There wasn't any clipping on the pcomp bus, so there was no need for a limiter.

I then used reverb on two of the finger snaps because they sounded dry and I wanted them wetter, to fit in with the ambient pad. I used the Space Designer because this reverb has a lot of useful presets, which helped me find the right one for the finger snaps. I used sends on the finger snaps so I could control how much reverb they should have: I pushed the sends until I could hear the reverb in them, then stopped, and that is how I made the finger snaps wetter.

After grouping the un-processed drums and pcomped drums to a master bus, I used the master bus to control how loud the drums should be. I didn't want them too loud, because overly loud drums would overpower everything throughout the song, and I wanted some space for the main chord progression, melodies, bass & sub bass and sound effects. So I listened very carefully to decide whether the drums should be louder or a bit quieter. They were a tiny bit too loud, so I turned the fader down by 1dB; they were still a little loud, so I pulled the fader down by another half a dB, and this made the drums sit in with the sounds used in the song.

Pad, Piano, Bells, Bass, Sub Bass and Sound Effect - I didn't group these sounds because there weren't many channels and it was easier to control their levels with their own faders, and easier to solo each channel, listen, and decide how to improve its tone and clarity. I soloed the pad first and opened an EQ so I could look at its frequency range and decide what needed cutting and what needed boosting. I noticed some unwanted lo end in the pad that would not let the kick drum punch through, so I used a cut-off band to get rid of it, which meant I could push the pad without it clashing with the drums. I then used a hi shelf band to give the pad more clarity, because it didn't have enough at first, and the hi shelf also changed the tone of the pad. The last thing I did to the pad was push the fader up, because the pad sounded distant and not loud enough.

I repeated the same steps for the piano and bells only, because the bass and sub bass shouldn't have their lo end cut off: they provide the body and warmth of the song, and cutting their lo end would make the bass sound thin and weak. The only change I made to the bass was because it didn't have enough lo end, which made it sound weak; I used a lo shelf to boost the lo end, which helped the bass sound stronger, subbier and fatter. I then pushed the faders up on those two sounds so they didn't sound distant but sat well in the mix. The only thing I did to the sound effect was use the fader, because it sounded perfect in tone and didn't need any processing; the only problem was that it was very quiet in the track and had to be pushed up.

The piano melody was in mono, which didn't make the piano sound loud or wide. So I duplicated the piano's channel, copied the stem onto the duplicated channel and panned the two channels hard left and hard right to make the piano sound stereo. When I listened to the piano in stereo it sounded so much wider and had more presence.
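The duplicate-and-pan step can be sketched in Python. One caveat worth noting in the sketch: two identical copies panned hard left and right still collapse to the same centred image, so real widening usually needs a small difference between the sides (a tiny delay, detune or EQ change on one copy). The `spread_delay` parameter here is my own hypothetical illustration of that idea, not something I did on the track.

```python
# Sketch of turning a mono part into a two-channel part (illustration only).
def mono_to_stereo(mono, spread_delay=0):
    """Duplicate a mono signal onto left/right channels. With
    spread_delay > 0, the right side starts that many samples late
    (a crude Haas-style offset), which is what actually adds width."""
    left = list(mono)
    if spread_delay:
        right = [0.0] * spread_delay + list(mono)[:len(mono) - spread_delay]
    else:
        right = list(mono)          # identical copies: still centred
    return left, right
```

With no offset the two channels are sample-for-sample identical, so any perceived widening comes from later per-channel processing rather than the duplication itself.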

Vocals - I mixed the vocals the same way as for track 1, but I only used one bus because the other sets of vocals hadn't been recorded yet. I processed the lead vocals, and instead of using 2 EQs I used 1. I could use just 1 EQ because I recorded the vocals with the NTK, which didn't make them sound muffled or muddy and gave them clarity. So I only needed one EQ to roll off some of the lo end and cut out the nasally parts of the lead vocals.

I then compressed the lead vocals really hard using a low threshold, low ratio, fast release, slow attack and the vintage opto circuit. I set the compressor up this way so I could compress the vocals really hard without them sounding over-compressed. The release is the key parameter deciding how long the vocals are compressed for, and a short release allowed the vocals to be compressed hard without taking away their dynamics. The attack is another important parameter, because it sets how quickly the compressor reacts, and a slow attack let the transients of the lead vocals punch through; transients are the most important things to listen to. I used soft distortion on the lead vocals, which made them a bit louder and gave them some sizzle and crispiness.

I then used a De-Esser to get rid of the sibilance on the lead vocals. I opened a Multimeter before the De-Esser so I could look at the frequency range of the lead vocals and detect where the sibilance was coming from; it was around 14 - 16 kHz. I went back to the De-Esser and pushed the frequency on the detector to roughly 16kHz, because 16kHz is a harsher frequency and is where most of the sibilance lives. I kept the sensitivity at 50%, because the sensitivity sets how responsive the De-Esser is to sibilance, and pushing it up means more sibilance gets suppressed. Keeping it at 50% was the best way of reducing not too much and not too little.

From there I used the frequency on the suppressor, pushed it to between 15 - 16 kHz, and used the strength to reduce the sibilance and make the lead vocals sound subtler and softer. The last thing I used was the smoothing, which sets how long the De-Esser takes to reduce the sibilance. I used 20ms because I wanted the vocals to be De-Essed while still hearing the words pronounced clearly. I then played the song with the vocals and bypassed the De-Esser to compare before and after; with the De-Esser the vocals sounded less harsh and more subtle.

The next thing I did is parallel compress the vocals, because parallel compression makes the vocals louder so they cut through the mix. I copied the EQ from the lead vocals onto the PCOMP bus and copied the De-Esser after the compressor so that the parallel signal sounds the same as the vocals that have already been processed. I then set the compressor up the same way I set the compressor on track 1, because I wanted the transients to stay uncompressed and I didn't want to compress as hard, so that the dynamics of the vocals have more breathing space. I then selected the lead vocals and pushed the sends until I could hear them cutting through the mix and playing at a reasonable level.
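The principle of parallel compression can be shown with a small Python sketch: a heavily squashed copy of the signal is blended underneath the untouched dry signal, so the quiet detail comes up while the dry transients and dynamics stay intact. The `squash` helper and the 50% send level are crude made-up stand-ins, not a real compressor:

```python
def squash(samples, ceiling=0.25):
    """Crude heavy-compression stand-in: hard-limit the copy so its
    quiet detail comes up relative to its peaks."""
    return [max(-ceiling, min(ceiling, x)) / ceiling for x in samples]

def parallel_compress(dry, send_level=0.5):
    """Parallel compression sketch: keep the dry signal intact and
    blend the squashed copy in underneath to raise the loudness."""
    wet = squash(dry)
    return [d + send_level * w for d, w in zip(dry, wet)]
```

Notice that a quiet sample gains proportionally far more level than a loud one, which is exactly why the vocal feels louder without losing its peaks.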

After this I used sends for reverb and delay. I used these effects on the lead vocals only, because they sounded dry and that didn't fit the sound of the song. I copied the EQ from the lead vocals to the delay and reverb buses so that the harsh-sounding parts of the vocals are taken out of the reverb and delay as well. I used the Tape Delay and Space Designer reverb: the tape delay adds a stereo effect to the lead vocals and makes them sound wider, and the Space Designer has a lot of useful presets to choose from, plus the reverb makes the vocals wetter. After setting up the delay and reverb I selected the lead vocals and pushed the sends on them until I could hear the delay and reverb playing, and then I stopped pushing the sends.

I kept the lead vocals central so that when I record the other sets of vocals I can pan them left and right; keeping the lead vocals central leaves more space in the stereo field for the other vocals to play in. I then made sure the lead vocals were balanced with the other instruments and sounds. I had to make sure the lead vocals were loud but not overpowering and dominating the other instruments and sounds. If the lead vocals were too loud there would have been no space for the other instruments and sounds to punch through.

Track 3 - I mixed track 3 in the recording studio because I know the recording studio is the most reliable environment to do mixdowns and masters, and the Genelec monitors are the most reliable monitors for me to use in the mixing phase of a song. I bounced the song's stems and put them into Logic with the correct tempo set, then named and saved the Logic file onto the audio drive with the stems copied into the project so that I don't lose my work in progress.

Drums - I used parallel compression like I did on tracks 1 & 2 because I know parallel compression is the best way to make my drums louder and allow them to cut through the mix. I used 2 buses to group the drums and control all the drums like I did on tracks 1 & 2, and 1 send so that I can send the drums into the PCOMP bus. I set up the compressor the same way I set up the compressor on track 1 because I know this setting helped my drums sound louder and pokey, and it's the sound I wanted for my drums. After setting up the compressor I selected all the drums and pushed the sends into the compressor until I could hear them begin to get louder. I then stopped pushing the sends on the drums and turned the sends down on the finger snaps, hats and one of the snare drums, because I noticed they were too loud and this would make the mix sound very unbalanced. I wanted the kick and finger snaps to be the loudest of the drums and the other instruments to be sat in but not laid back too much.

I then used EQ to push the kick more and make it sound more thumpy. I used a lo shelf band and a parametric band so that I could increase the lo end of the kick and make its transient knock more. After EQ'ing the kick I bypassed the EQ to listen to what the kick sounded like with and without it, and the EQ made big improvements to how the kick sounded. The last thing I did is use a hi shelf to reduce the hi end in the hats, because the hats had too much hi end and this made them sound very piercing and harsh. I wanted the hats to sound subtle and balanced in frequency with the rest of the instruments and sounds being used.

I used sends on one finger snap and the clap because I wanted to make these instruments sound wetter, and I wanted the clap to have impact when it hits with the snares in the chorus. I used the Space Designer reverb because this reverb is versatile and not difficult to use, plus it has loads of presets and different textures of rooms, halls, outdoors, etc. I pushed the sends until I could hear the reverb cutting through; from there I stopped pushing the sends and made changes to them where necessary.

The final thing I did is group the unprocessed drums and parallel-compressed drums to one bus so I can control how much of the drums I want. I did this by pulling the faders down on the mixing desk and listening to the drums with the song, so that I know whether they are too loud or too quiet, and I can use the drums master fader as the track is playing to push the drums until they are playing at a reasonable level.

Pad, Bass, Bells, Guitar & Sweep Effects - The first sound I focused on is the pad. I used EQ to take a look at its frequency range and I noticed there was unwanted lo end in the pad, which I knew would get in the way of the kick drums punching through. So I used a cut-off band to get rid of the lo end, which helps the kick have its own breathing space to punch through. I then used a hi shelf to boost the hi frequencies in the pad because I wanted to make the pad sound clearer, shinier and more polished. I didn't boost too much of the hi end, because boosting too much can cause the pad to sound piercing, and too much hi end can drown out the other instruments. I used 2dB on the hi shelf band and this was enough to make the pad sound improved and polished. I didn't use compression or limiting because I wanted the pad's dynamics to have lots of breathing space in the mix, and the channel wasn't clipping. After processing the pad I repeated the same steps for the bells and guitar.

For the bass I started with EQ because there wasn't enough lo end in the bass and it sounded weak. I wanted my bass to sound subby, warm and strong, so I used a lo shelf band to boost the lo frequencies in the bass and get that strong subby sound. The EQ wasn't enough to make the bass as strong as I wanted it to be, so I used the overdrive distortion; this distortion lets me choose which frequency range to distort and it helps bring out harmonics in any instrument. An example would be using overdrive distortion on an 808 to make the sine waves sound more like square waves, and square waves have plenty of harmonics.

I distorted the bass below 100Hz because 100Hz and below is where all the sub lives, and this frequency area is the best place to make the bass sound fatter. I pushed the gain on the overdrive to 2dB; I only used 2dB because adding too much distortion would make the bass sound very distorted and unpleasant to listen to. Using a little bit of distortion is a good way of making instruments sound fatter, crispier and stronger; distortion is not a bad thing to use unless you are overdoing it.
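The "a little distortion makes things fatter" idea can be shown with a toy soft clip in Python, with a tanh curve standing in for the overdrive (this is not Logic's actual Overdrive plugin, and the drive amount is made up). Squashing the peaks of a sine pushes it towards a square-ish shape, which adds harmonics and raises the average level without raising the peak:

```python
import math

def overdrive(samples, drive=4.0):
    """Soft-clip overdrive sketch: tanh squashes the peaks of a sine
    toward a square-ish shape, adding the harmonics that fatten a bass.
    Normalised so a full-scale input still peaks at 1.0."""
    return [math.tanh(drive * x) / math.tanh(drive) for x in samples]
```

Running a sine through it, the peak stays at or below 1.0 but the RMS level rises, which is the "fatter and stronger" effect described above.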

I didn't process the sweep effects because when I listened to them soloed I didn't notice anything wrong with them, and the only thing I needed to do was balance them so they are not too loud in the song. I pulled the faders down on the mixing desk and listened to what was loud and what was quiet. I noticed the guitar, bass and bells were a bit too loud. I didn't want the guitar loud, so I pulled the fader down until the guitar sat in with the pad and bass, which stopped the guitar being the most dominating instrument. I wanted the bass and pad to be fairly loud because these are the two main instruments in the song. However, the bass was a bit louder than the pad, so I pulled the bass down by 1.5dB so that the bass sits just behind the pad and they are not fighting each other. I then made the bells 1dB louder than the guitar because I wanted the bells to be heard a bit more than the guitar but at the same time sit well in the mix. Finally, I made the sweep effects sit in with the pad and drums, because leaving them loud would make them the most overpowering sounds in the song; plus, the sound effects play across all frequencies and this can cause the other instruments and sounds to be drowned out. So it was important to make the sweep effects sit in.

Track 4 - I mixed the 4th track in the recording studio because I know the recording studio never lets me down with what I am listening to. The studio's Genelec monitors always help me keep track of what I am listening to, and I'll know exactly what to do if an instrument or sound is too loud, too quiet or needs improving. I bounced the track into individual stems, put them in a folder and labelled the folder appropriately with the track's BPM like I did for the other tracks. As with the other 3 tracks, I imported the stems into Logic and saved the project onto the audio drive with the audio files copied, so that I won't lose any of my work. I then pulled the faders down in Logic so that I have 6dB of headroom on the output master channel and can listen for the most dominant instruments or sounds in the song and then make changes to them.

Drums - I parallel compressed the drums the same way I did on the other tracks because I know parallel compression is the easiest way of making drums or vocals louder so they cut through the mix. I used the same number of buses for the same purposes as on the other tracks, and I set the compressor up the same way I set the compressor on track 1, because that setting gave my drums more punch. On this compressor I also turned the output distortion to soft, which made my drums even louder; the soft distortion gave the snares and hats more crunch and sizzle and made the kick sound more thumpy as well. As I was using the sends on the drums I made sure the kick and snares were the loudest things in the song and the hats laid back, because the kick and snares don't have harsh piercing frequencies whereas the hats have a lot of hi end, and pushing hi-end instruments too much can make the mix sound very piercing. So I made sure the kick and snares are the loudest and most heard drum instruments in the song and the rest of the drums not as loud.

I then put a send on the clap going into the Space Designer reverb. I put reverb on the clap because I wanted the clap to have impact when it hits with the snares and to fit in with the sound of the pad. I pushed the send until I could hear the reverb playing, and when I heard the reverb playing with the clap I stopped pushing the send. One of the snares I used in this song was sampled with reverb already on it, which meant there was no need to add more.

Pads, Sub Bass, Lead, FX & Sweep - The first instrument I mixed is the pad. I used EQ first and cut off some of the lo end, because the lo end wasn't needed in the pad and, if it wasn't cut off, it would have gotten in the way of the kick and sub bass cutting through. Cutting off the lo end in the pad helped the kick and sub bass have their own space to punch through; plus, the sub bass will be providing the body and warmth of the track. I then used a hi shelf to boost some of the hi frequencies to give the pad a brighter, more polished sound. I didn't want my pad to sound dull and faint, and the hi shelf helped not only the clarity but the tone as well. I didn't use compression or limiting on the pad because I wanted the dynamics of the pad to have plenty of breathing space, and the channel wasn't clipping, which means there was no need to put a limiter on it. I then repeated the same steps to mix the other pad and the lead.

I listened to the sub bass soloed and it sounded strong and subby, which meant it didn't need any EQ'ing, distorting, compressing or limiting. The sub bass sounded just right without processing it. I then listened to the sub bass with the other instruments and sounds, and the only thing I had to do was push the sub bass up, because it was too laid back in the track and the sub bass is the key instrument that provides the body of the track. So I used my ears as I was pushing the fader up to listen for the sub bass cutting through, and as soon as I could hear it cutting through I stopped pushing the fader and left it at that.

I didn't process the FX and sweep because I wanted to keep the sound and tone of these 2 instruments the same. As I listened to them un-soloed their tones added texture and colour, and that's a sound I wanted to keep for this song without making any changes. I did notice the sweep was a bit too loud, and the sweep didn't need to be so loud because it plays across every frequency, so I pulled the fader down on the sweep until I could hear it sat in with the drums mostly.

Track 5 - I mixed the 5th track in the recording studio as I know the recording studio is a reliable place to mix in. I bounced the song into individual stems and put the stems into Logic with the tempo set correctly and, like the other tracks, I named and saved the 5th track with the audio files copied onto the audio drive. After doing that I pulled the faders down in Logic so that there is 6dB of headroom on the output master channel.

Drums - I parallel compressed the drums the same way I parallel compressed the drums on track 1, and I used the same number of buses for the drums because they were going to be used for the same purposes as on the other tracks, and I know parallel compression will help my drums become louder and cut through the mix sharply. I set up the compressor the same way I set up the compressor on track 1 because I wanted the same sound for my drums, and setting the compressor up this way helps with shaping the drums and making them knock more.

I only sent a bit of the toms, kick and crash because I can push the other drum instruments using their faders. I sent a bit of the kick, toms and crash because these three instruments were not cutting through the mix enough and sounded too quiet and laid back. The best way I knew whether these instruments were cutting through was by pulling the faders down on the mixing desk and listening to what I could hear the most. I then pushed the three sends until I could hear them cutting through sharply, and at that point I stopped pushing the sends. I used EQ after compression because I wanted to push the drums even more. The kick didn't have enough lo end, so I used a lo shelf band to boost the lo end in the kick and a parametric band to make the transient of the kick knock more.

After EQ'ing the drums I made sure the open and closed hats were sat in with the kick and snare, making sure they are not the loudest/most dominant instruments in the mix. I only wanted the kick and snare to be the loudest instruments because they are the main instruments of the song, and it was important for me to make sure they are not clashing with other instruments.

Grand Piano, Electric Piano, Pad, Bass, Lead & Bells - The first instrument I focused on is the grand piano. I used EQ first because I wanted to look at the piano's frequency range and make decisions on what could be improved. There was a bit of unwanted lo end in the piano and this would've clashed with the kick, so I cut off this lo end, knowing this will help the kick have its own space to punch through. As I was EQ'ing the piano I listened very carefully, because I didn't want to cut off too much of the piano; that would've made the piano sound thin, and this isn't the sound I want for it. I then used a hi shelf band to brighten the piano because the piano didn't have enough shine, and I wanted it to sound shiny and polished, plus have a better tone. After EQ'ing the grand piano I didn't use compression or limiting, because I wanted the dynamics of the grand piano to remain the same and have plenty of breathing space, and the channel wasn't clipping. I EQ'd the electric piano, pad, lead and bells the same way I EQ'd the grand piano, but without using a cut-off band on the electric piano, lead and bells, because there was no lo end in these instruments to cut off and they didn't cause clashing with the drums.

The bass didn't have enough lo end and it sounded very weak, which isn't the sound I wanted for my bass. I wanted my bass to sound strong and subby. So the first thing I did is open an EQ and boost some of the lo end using a lo shelf band. I then used overdrive distortion and selected the frequency I wanted to distort, which helped make the bass sound fatter. Finally, I used the Amp Designer with the 'blues blaster' preset, and this helped the bass sound even fatter, stronger, subbier and warmer.

I used a reverb send on the electric piano and automated it at the very end of the song. I did this because I wanted the electric piano to have a tail rather than dropping to silence quickly. By adding reverb to the electric piano I was able to make it fade out more slowly and give a proper ending to the song. The reverb I used is the Space Designer.

After processing these instruments I made sure they were balanced. Before balancing them I made decisions about what should be loud and what should be laid back. I wanted the grand piano, electric piano and bass to be heard the most, because the grand and electric pianos are the main chord progressions in the song and the bass is going to be providing the body and warmth of the song. I wanted the pad to be sat in with the electric piano, because I wanted the electric piano to still be heard the most, with the pad playing as a backing instrument for it. Finally, I wanted the lead and bells to be laid back well; by this I mean I didn't want the lead and bells to sound distant and far away, but I didn't want them to sound very loud either. However, I wanted the bells to be a dB louder than the lead because the bells have a subtle sound, and this allows me to push them higher in the final mix.

Track 6 - I mixed the final track in the recording studio like I did for the other tracks, because I know the recording studio is a unique environment for mixing and mastering and the studio's monitors are always accurate. I didn't need to bounce stems because I produced the final track in Logic; I put the Logic project onto my memory stick so that when I'm in the recording studio I can save the project onto the audio drive.

Drums - I used parallel compression for the drums because I know parallel compression is the best way to make my drums cut through the mix and it helps the drums pump harder. I used the same number of buses because they were going to be used for the same purposes as the drums on the other tracks. After setting up the buses I set up the compressor the same way I set up the compressor on track 1, because I know setting the compressor up this way helps the shaping and dynamics of the drums; plus, the transients of the drums will cut through and not be compressed, and it's important the transients always cut through or the dynamics will be taken out.

I then pushed the sends on the drums, listening for what I want to hear the most, which is the kick and snare. I stopped pushing the sends once I could hear the kick and snare cutting through; however, I noticed the hats were too loud, so I turned the sends down on the hats until they were sat in with the kick and snare. Making the hats too loud would make the track's frequencies unbalanced, and it's important for the track's frequencies to be balanced and for each instrument to be playing at a satisfying level. I then used EQ on the snare and hats only, because I wanted to push the transient of the snare even more so that the snare knocks more, and I wanted the hats to have more sizzle and crispiness. So I used a parametric band to make the transient of the snare knock and a hi shelf band to raise the hi frequencies of the hats and give them more sizzle. I didn't EQ the kick because the kick sounded punchy and thumpy, which meant there was no need to push it harder.

Electric Piano, Strings, Bells & Bass - I listened to the electric piano soloed to decide what I could do to improve the tone. As I kept listening to the electric piano I didn't notice anything wrong with how it sounded, but I did improve the clarity, using a hi shelf to do this. With the hi shelf boosted the electric piano sounded much warmer, more colourful and shinier. I didn't use compression or limiting because I wanted the dynamics to flow and have plenty of breathing space, and the channel wasn't clipping. I listened to the electric piano with the rest of the instruments and it sounded distant, but I planned on balancing every instrument after processing them all, because after processing each instrument and bringing the best out of them I can balance more accurately.

I then repeated the same steps to improve the sound and tone of the strings and bells, because I know this method of processing helped improve the clarity and tone of the electric piano and it will do the exact same thing for the strings and bells. I didn't process the bass because the bass I used in this track has a strong lo end and provides a lot of body and warmth, so there was no need to add more processing.

The last thing to do was balance all the instruments and sounds. I did this by pulling the faders down on the mixing desk, because that way I don't expose my ears so much and balancing everything is more accurate and precise. I made the kick, snare and electric piano the loudest instruments because the kick and snare provide most of the rhythm and the electric piano is the main chord progression of the song. I then made the open and closed hats sit in with the kick and snare, ensuring that they are not the loudest instruments in the mix. I made the snare fill 1dB louder than the hats because I wanted the snare fill to punch through more; the snare fill plays a key part in the song, which is introducing the next section. I then made the bells roughly the same level as the hats because the bells were a bit loud and I didn't want them to be overpowering; I wanted the bells to sit at a reasonable level where they can still be heard but are not very dominant. Finally, I made the strings 1dB less than the electric piano, because the strings are the harmony of the song and if they sounded distant the song's harmony would sound like it wasn't there. So I made sure the strings were loud but not louder than the electric piano.

Thursday 2 April 2015

Synth and Sampler Demo

Pad - I used the ES2 synth to make the pad. I used this synth because it has useful controls such as the octave knob, which can be used to transpose one oscillator an octave higher or lower. This is really good for adding more harmonics or sub to any sound, because when the pitch goes higher the frequencies go higher too, and when the pitch goes lower the frequencies go lower too. This plugin also contains a triangular mixer, another useful function, because once I have selected all the waveforms I want to use I can use the triangular mixer to control the amount of each oscillator, and it helps me find the right sound I am listening for.

I used two sine waves on oscillators 1 & 2 and the orgn 3 wave on oscillator 3, and I transposed the first oscillator an octave higher. I used these oscillators because a sine wave is just a pure signal, and sine waves are really useful for providing depth and warmth to any sound. I transposed the first oscillator an octave higher because I didn't want it to clash with the second oscillator, and transposing it by one octave helped the pad have more harmonics. I wanted the pad I was synthesising to have harmonics and not sound like a pure signal.
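The reason transposing an oscillator up an octave adds energy higher in the spectrum is the equal-temperament pitch relationship: every 12 semitones doubles the frequency, and every -12 semitones halves it. A one-line Python helper makes this concrete:

```python
def transpose(freq_hz, semitones):
    """Equal-temperament pitch shift: +12 semitones (one octave up)
    doubles the frequency, -12 halves it."""
    return freq_hz * 2.0 ** (semitones / 12.0)
```

So an oscillator playing A at 440 Hz transposed up an octave lands at 880 Hz, which is why the octave-up sine sits above the other sine instead of clashing with it.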

I used the orgn 3 waveform because this waveform added more harmonics and brightness to the pad I was synthesising. However, it was also making the pad sound too harsh, so I used the lo pass filter that is located in the centre of the plugin to cut off some of the harmonics so that the pad isn't so harsh sounding. I also used a parametric EQ to cut off a tiny bit more of the harmonics, because it was still sounding a bit harsh and tinny. I used a gentle roll-off because a gentle roll-off is more accurate at filtering out hi-end frequencies, and I wanted to make the pad sound as subtle as possible.

I used the phaser effect that is located on the right-hand side of the ES2 to give the pad a stereo image. I wanted the pad to have a stereo image because pads sound wider when they have one; without it, this pad would've sounded mono, and I wanted the pad I was synthesising to sound wide and ambient. I then turned the intensity up so that I could hear the phaser working with the pad; if the intensity wasn't up I wouldn't have been able to hear the phaser working and the pad wouldn't have that stereo image. I also adjusted how much intensity I wanted to use, because too much intensity would've made the pad sound muddy, and this isn't the sound I wanted to hear as I was synthesising the pad.

I used the lo pass filters that are located in the centre of the ES2. I used these filters because the pad had a harsh, piercing sound and it didn't sound very pleasant to listen to. I adjusted the blend level so that it's central, so I could use the two filters equally. I didn't want to cut off too much, because cutting off too much would've taken out the harmonics in the pad, and the harmonics are what make the pad sound bright and stand out. The picture below is the evidence to show what I did with the ES2 to synthesise the pad.


Bass - I used the ES2 plugin to make the bass because the ES2 is really good for transposing an octave higher or lower, which gives more sub or harmonics. Most bass sounds that are synthesised normally use sine waves, and sine waves are just pure signals; there are no harmonics in them whatsoever. The ES2 also has a distortion control, which will add harmonics to the bass and make it sound fatter. Another good thing about the ES2 is that it has three oscillators, and three oscillators are good for combining waveforms and making a synthesised sound more harmonic and polished.

I used a sine wave on oscillator one, the '00' waveform on oscillator two and the '35' waveform on oscillator three, then transposed oscillators one and three down an octave. I chose these waveforms because they were good for giving the bass a strong lo end and harmonics. Transposing oscillators one and three down an octave is the key step that gave the bass that strong lo end: the frequencies on those oscillators went lower because the pitch went lower.

I then used the distortion on the ES2 because the distortion is a brilliant function that makes mostly lo-end instruments sound fatter and stronger. I turned the distortion up towards full because, as I was adding it, it didn't distort horribly, and that meant I could keep pushing the distortion until I heard the bass distorting horribly. From there I turned the distortion down a tiny bit, which makes the bass sound stronger and fatter with just a gentle distortion on it.

It is really important to be careful with distortion because too much distortion will overpower the bass and it will clip horribly. Distortion is stronger than a compressor because distortion turns the clipping level into a brick wall, and by overusing it the instrument or sound will overpower and clip because the distortion is holding too much. I then used the sine level function that's located on the right-hand side of the ES2; what this function does is make only the sine waves louder. I used this function because I wanted the bass to be quite loud but not too loud. If the bass was too loud it would've caused the output master to clip and distort, so I only turned the sine level to 50%.

I then used the triangular mixer that's located next to oscillator two and used my ears to listen for the sound I wanted for the bass. I used this triangular mixer because this function is really good for combining all three oscillators and listening for the right sound. I used my ears because I rely on my ears more than looking at the screen; looking at the screen can be deceiving and ears are more trustworthy. From there I used the ADSR on envelope three. I turned the attack down because I wanted the bass to play straight away rather than wait a long time, and I turned the decay down because I didn't want the decay to gradually reach the sustain level. What I did instead is let the attack go straight into the sustain, so that I can hold the bass for as long as possible without the bass reaching its quietest point.

I then used a small amount of sustain and a short release. I used a small amount of sustain because I wanted the bass to last for a period of time; if the sustain was too short the bass would have stopped playing after a few seconds. Another reason I used a small amount of sustain is that, if I were to make a bassline using this bass, the sustain plays a key role in ensuring the bass doesn't stop while I'm playing and recording. I used a short release because if the bass had a really long release it would take a long time to reach zero, and the bass doesn't need a long release because it is a dry instrument and doesn't have or need an audio tail.
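The attack/decay/sustain/release shape described above can be sketched as a simple linear envelope generator in Python. The times and levels here are made-up illustrative values, not the actual ES2 settings: a fast attack so the bass speaks immediately, a decay down to the sustain level, and a short release so the dry bass reaches silence quickly.

```python
def adsr(n_samples, sr=44100, attack=0.01, decay=0.05,
         sustain=0.3, release=0.05):
    """Linear ADSR sketch: ramp 0->1 (attack), 1->sustain (decay),
    hold (sustain), then ramp sustain->0 (release) at the end."""
    a = int(attack * sr)
    d = int(decay * sr)
    r = int(release * sr)
    env = []
    for i in range(n_samples):
        if i < a:                       # attack: ramp 0 -> 1
            env.append(i / max(a, 1))
        elif i < a + d:                 # decay: ramp 1 -> sustain
            t = (i - a) / max(d, 1)
            env.append(1.0 - t * (1.0 - sustain))
        elif i < n_samples - r:         # sustain: hold the level
            env.append(sustain)
        else:                           # release: ramp sustain -> 0
            t = (i - (n_samples - r)) / max(r, 1)
            env.append(sustain * (1.0 - t))
    return env
```

Multiplying each bass sample by the matching envelope value gives exactly the behaviour described: an immediate start, a held body at the sustain level, and a quick fade to silence with no long tail.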

The last thing I did was use an EQ to take out some of the harmonics in the bass, because the harmonics in the bass sounded very harsh. I used a not-too-steep lo pass filter to filter out those unwanted frequencies. After doing this the bass sounded much subtler and warmer, and a tiny bit of the harmonics is still audible in the bass. The EQ also helped make the bass more subby and strong. The picture below is the evidence to show what I did with the ES2 & EQ to synthesise and equalise the bass.


Lead - I used the ES P synth to make the lead because the ES P is a synth that is straightforward to use; there aren't many controls and most of them are easy to configure. On the left-hand side you control the waveforms using faders rather than selecting them like in the ES2 plugin. There is only one ADSR envelope on the right-hand side, and it isn't difficult to use because you're only using one envelope. There are also chorus and overdrive functions on the left-hand side, and these two functions are really good for adding effects and harmonics to the lead; leads wouldn't sound great without effects and harmonics. Another good thing about this synth is the frequency function located in the centre of the plugin. This function is good for shaping the lead's frequency range: you can adjust how much frequency content you want the lead to have, and this is a good way of giving the lead harmonics.

The first thing I did to make the lead was select the waveforms I wanted to use: the triangle, the sawtooth and the sub-oscillator -1 octave slider. I used these waveforms because by using more than one waveform I'm making the lead sound harmonic, and the lead will have a big frequency range. I used more of the triangle waveform because the triangle waveform has some harmonics that made the lead sound electronic, but the triangle waveform wasn't enough. This is where the sawtooth waveform comes in: I used a tiny bit of the sawtooth waveform because it has a bigger frequency range and more harmonics than the triangle waveform. By combining these two waveforms I managed to synthesise a lead that has a harmonic sound and a big frequency range.

The last waveform I used was the sub-oscillator at -1 octave, which adds a copy of the sound an octave lower. I only used a tiny amount, because too much of it would have pulled the whole frequency range down, and I wanted the range to come down slightly but not by much. I also didn't want the lead to lose its harmonics along with its frequency range, because the harmonics are what make the lead sound bright and polished, and with the lead a whole octave lower all of that bright, polished sound would have disappeared.
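The waveform mixing described above can be sketched in Python. The mix amounts here are illustrative guesses at "mostly triangle, a little saw, a little sub", not the actual ES P fader positions I used:

```python
import math

def triangle(phase):
    """Triangle wave: only odd harmonics, so bright but mellow."""
    return 4.0 * abs(phase - math.floor(phase + 0.5)) - 1.0

def sawtooth(phase):
    """Sawtooth wave: all harmonics, the brightest basic waveform."""
    return 2.0 * (phase - math.floor(phase + 0.5))

def lead_osc(freq, n, sample_rate=44100, tri=0.7, saw=0.15, sub=0.15):
    """One output sample of the lead: mostly triangle, a tiny bit of
    sawtooth, plus a sub-oscillator an octave down (half the frequency)."""
    t = n / sample_rate
    return (tri * triangle(freq * t)
            + saw * sawtooth(freq * t)
            + sub * triangle(freq / 2.0 * t))
```

Because the weights sum to 1, the mixed output stays within the normal -1 to 1 range while combining the harmonic content of all three sources.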

After setting the waveforms I used the octave number on the left-hand side of the oscillator. The numbers show how high or low the octave goes: 16 is the lowest and 4 is the highest. I chose 4 because a higher octave helped make the lead's frequency range bigger, and a bigger frequency range means the lead sounds more crisp. 8 and 16 would have taken that crispness away and given the lead some low end, which isn't the sound I wanted as I was synthesising the lead. I wanted this lead to sit high in the frequency range so it could punch through the mix of a song and be heard clearly without any clashing.

The next thing I used was the frequency (cutoff) control located in the centre of the ES P. This control let me adjust the width of the lead's frequency range, and I used my ears to judge what sounded subtle and what sounded harsh. I relied on my ears more than my eyes, because staring at the screen can confuse what you're actually hearing, and your ears are the most reliable and trustworthy judge of what sounds good and bad.

I used a lot of chorus and a tiny bit of overdrive on the lead. The chorus effect gives the lead a stereo image, and I wanted the lead in stereo because that makes it sound wider and louder, whereas in mono it would have sounded central and wouldn't have filled the empty space. I also used a little overdrive because I wanted the lead to have some sizzle and fizz. I didn't use a lot, because too much overdrive would distort the lead and the distortion wouldn't be pleasant to listen to. With just a little, the lead sounded that bit different, because the overdrive gave it sizzle and fizz.
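As a rough model of the two effects (not the ES P's actual curves, which I don't know): overdrive can be sketched as soft clipping, and the widening idea behind chorus as blending in a delayed/detuned copy of the signal:

```python
import math

def overdrive(sample, drive=1.5):
    """Soft clipping: a low drive adds gentle sizzle (extra harmonics),
    a high drive distorts hard. tanh is a stand-in for the real curve."""
    return math.tanh(drive * sample)

def widen(dry, delayed_copy, mix=0.5):
    """Crude chorus idea: blend the signal with a slightly delayed copy
    of itself to make it sound wider. A real chorus continuously
    modulates the delay time, which this sketch does not."""
    return [(1 - mix) * d + mix * c for d, c in zip(dry, delayed_copy)]
```

The soft clip never exceeds -1 to 1, which is why a touch of drive adds fizz without audible hard distortion.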

I then used the ADSR envelope to configure how the lead should play. I used a sharp attack because I wanted the lead to sound straight away when I hit a key on the MIDI keyboard. With a slow attack I would have had to wait for the sound to come in, and I wouldn't have been able to play a melody with the lead I synthesised. I kept the decay at 50% so the level wouldn't jump as the decay moved into the sustain, which made it easier to play a melody with the lead without the volume changing. I didn't want the volume of the lead changing constantly, because I synthesised this lead to play the melody of the demo track, and it wouldn't sound like a lead if the volume levels kept changing.

I used very little sustain for the lead because I didn't want each note to last long. The lead I synthesised has a lot of high end, and a lot of high end isn't pleasant to listen to for a long period of time. If the sustain had been put to full, the lead would have become harsh and piercing, so I kept it short. The last control I set was the release. I wanted very little release so the lead would stop playing as soon as I let go of a key. With a long release each note would have taken a few seconds to die away, and in a melody the notes would have clashed together. So that is why I used very little release: the notes don't clash and the lead stops playing as soon as a key is no longer held.
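The envelope shape described above can be sketched as a simple linear ADSR. The segment lengths and sustain level here are made-up example values, just to show how "sharp attack, steady sustain, short release" behaves:

```python
def adsr(n_samples, attack, decay, sustain_level, release):
    """Linear ADSR sketch: attack/decay/release are lengths in samples,
    sustain_level is 0..1. A sharp attack means a small attack value,
    so the note speaks immediately; a short release means the note
    stops almost as soon as the key is let go."""
    env = []
    sustain_len = n_samples - attack - decay - release
    for n in range(attack):                       # ramp 0 -> 1
        env.append(n / attack)
    for n in range(decay):                        # fall 1 -> sustain_level
        env.append(1.0 - (1.0 - sustain_level) * n / decay)
    env.extend([sustain_level] * sustain_len)     # hold while key is down
    for n in range(release):                      # fall sustain_level -> 0
        env.append(sustain_level * (1.0 - n / release))
    return env
```

Multiplying the synth's output by this envelope is what shapes each note's volume over time.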

Next I put a reverb on a bus and used the bus send to control how much reverb the lead should have. I added reverb because the lead sounded dry and I wanted it to sound wetter. Putting the effect on a bus is easier to control than placing it directly on the channel, because the send lets you dial in exactly how much of the effect you want, with ease. That is why I put the reverb on a bus for the lead: so I could control how much reverb I wanted. If I had used too much reverb, the lead would have drowned itself out along with the rest of the instruments, which is why I used very little.
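The bus/send idea boils down to a simple mix: the dry channel stays at full level and one send amount controls how much of the effected signal is blended back in. A minimal sketch (the signals here are just example numbers):

```python
def bus_send(dry, wet_effect, send_amount):
    """Bus/send model: the dry signal passes through untouched while
    send_amount (0..1) controls how much of the wet (e.g. reverb)
    signal is mixed in -- one knob adjusts the effect amount."""
    return [d + send_amount * w for d, w in zip(dry, wet_effect)]
```

Setting the send to 0 gives back the dry signal exactly, which is why a bus makes it so easy to compare dry and wet.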

The last thing I did was EQ the lead, because there was unwanted low end and too much high end. I used the cut-off bands to get rid of the low end and some of the high end. I also reduced the body of the lead, because the spikes in it were too big and made the lead sound a little piercing, so I used a parametric band to bring those spikes down and make the lead more subtle. EQ'ing made a real difference to how the lead sounded: it was much improved with less high end, since the excess high end was harsh and needed cutting, and with the unwanted low end removed, all the low-end instruments can punch through the mix. The picture below is the evidence to show what I used to make the lead and what I did to improve it.


Electronic and Acoustic Drum Kits - To make the electronic and acoustic drum kits I took some acoustic and electronic one-shot drum samples from royalty-free drum packs. The first thing I did was open the EXS24 synth, click edit and drag in the acoustic samples. I didn't need to set a root key, because drums aren't pitched sounds and don't need a specific key to play on, but I did assign each drum its own individual key so they don't all play on one key, and I labelled the drums appropriately so I know which instrument is which. The next thing I did was save the drum kit with all the drum samples copied into Logic, so the kit itself is saved. I then repeated the same steps to make the electronic drum kit.
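The "each drum on its own key" mapping can be pictured as a simple table from MIDI key numbers to samples. The file names below are hypothetical stand-ins for the actual pack samples, and starting at MIDI key 36 (the usual kick position in General MIDI drum maps) is just a convention:

```python
# Hypothetical file names standing in for the real one-shot samples.
drum_samples = ["kick.wav", "snare.wav", "hat_closed.wav",
                "hat_open.wav", "clap.wav"]

def map_drums_to_keys(samples, start_key=36):
    """Spread unpitched one-shots across consecutive MIDI keys,
    mirroring what the EXS24 zone editor does when each drum
    is assigned its own key."""
    return {start_key + i: name for i, name in enumerate(samples)}
```

Hitting key 36 then triggers the kick, 37 the snare, and so on, with no pitch involved.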

I used the EXS24 to make the drum kits because it has a sample editor that lets you drag in and save your samples, which comes in really handy when building drums for a song. You can also put pitched sounds into the EXS24. The way you'd do this is find a sound you like, such as a pad, play one note, make that note last 8 - 16 bars, bounce it as a WAV, drag the WAV into the EXS24's sample editor and assign the note to the correct key. After doing this, save the sampler instrument appropriately with the sample copied into Logic, otherwise you will lose your samples. Once all that is done, you can make a chord progression out of that sampled pad.

The EXS24 has good functions such as pitch, pan, velocity, key range, reverse, one shot, etc. The reverse function is really useful for reversing sampled drums or chord progressions: ticking reverse and playing a sampled sound adds a really cool effect that works well for a fill or a drop. One shot plays the full sample no matter how quickly you take your finger off the key. Pitch is a really useful function for playing a sample on different keys. First you'd adjust the key range of the sample - C3 - C5 is a good range - then tick the pitch box, and from there you can play the sampled instrument or sound across C3 - C5 and hear the pitches changing. These images show what I did to make the electronic and acoustic drum kits.
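Two of those functions are simple enough to sketch directly. In terms of the sample data, reverse just plays it backwards, and one shot ignores how long the key was actually held:

```python
def reverse_sample(samples):
    """The reverse tick-box amounts to playing the sample data backwards."""
    return samples[::-1]

def play(samples, one_shot, held_length):
    """One shot plays the whole sample regardless of how long the key is
    held; without it, playback stops when the key is released."""
    return list(samples) if one_shot else list(samples[:held_length])
```

That is why a reversed cymbal swells into a drop, and why a one-shot kick always rings out fully even on a short stab.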


Multi Sample Instrument - I used bells to make the multi-sample instrument. I made a MIDI recording in FL Studio with the bells I chose and played the notes across three octaves. I then bounced the recording as a WAV and brought it into Logic. The next thing I did was strip silence the bells: I pressed 'ctrl + x', the shortcut for Strip Silence, and adjusted the threshold only, because the other settings - 'minimum time to accept as silence', 'pre attack-time' and 'post release-time' - were fine as they were and didn't need changing. After strip silencing the WAV file I created 11 new audio channels, put each strip-silenced bell on its own channel, soloed each one and bounced them as WAV files, naming each bell with the key it's in. I named the bells this way because when I import them into the EXS24's sampler, it automatically assigns the key by reading the name of the audio file, which is a quicker method than assigning the files manually.
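Strip Silence essentially splits the audio into regions wherever the level stays under a threshold for long enough. A rough Python sketch of that idea, with made-up threshold and gap values (not Logic's actual algorithm or defaults):

```python
def strip_silence(samples, threshold=0.05, min_gap=3):
    """Split a signal into regions wherever the level stays below
    threshold for at least min_gap samples -- a toy model of
    Logic's Strip Silence, where only the threshold was adjusted."""
    regions, current, quiet = [], [], 0
    for s in samples:
        if abs(s) >= threshold:
            current.append(s)
            quiet = 0
        else:
            quiet += 1
            if current and quiet >= min_gap:   # gap long enough: close region
                regions.append(current)
                current = []
    if current:
        regions.append(current)
    return regions
```

Raising the threshold makes the split more aggressive, which matches why the threshold was the one control worth adjusting.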

After bouncing all the strip-silenced bells I opened the EXS24 on a different channel and opened the sampler. I dragged in all the bounced bells and clicked the first option, "Auto map by reading root key from audio file", which assigned all the WAV files to their designated keys. I then saved the multi-sample instrument with all the samples copied into Logic, so I wouldn't lose the instrument or the samples in the sample editor. There were gaps between the assigned notes that I didn't want, so I extended each note to fill them in, and I made the multi-sample instrument span 3 octaves so I could play more notes at different pitches. I also made sure the pitch box was ticked on all the samples, otherwise I wouldn't be able to play the notes on the higher keys. The images below show what I did to make the multi-sample instrument.
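The reason naming the bounces by key works is that a note name like "C3" converts directly to a MIDI key number. A sketch of that conversion, assuming a "name_Note.wav" pattern with single-digit octaves and the C3 = MIDI 60 convention Logic uses (the filenames are hypothetical):

```python
NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def root_key_from_name(filename):
    """Read the root key from a filename like 'bell_C3.wav' --
    the same trick the EXS24's auto-map option relies on."""
    stem = filename.rsplit(".", 1)[0]
    note = stem.rsplit("_", 1)[-1]                  # e.g. "C3" or "F#4"
    name, octave = note[:-1], int(note[-1])
    return NOTE_OFFSETS[name] + 12 * (octave + 2)   # C3 -> MIDI 60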



Sweep - I made a sweep effect using the ES P synth. I used the noise oscillator only, because a sweep effect is just white noise, which plays all the frequencies together. I used a sharp attack, long decay, long sustain and short release: a sharp attack so the sweep effect comes in straight away, a long decay so the level is the same when it goes into the sustain, a long sustain so the white noise lasts a lengthy period of time, and a short release so the white noise stops after 1 - 2 seconds. The next thing I did was add a little overdrive and chorus. I used these two functions because I wanted the sweep to have some sizzle and a bit more character - the chorus provides the tone, and without it the sweep would just be plain white noise. So that is why I used overdrive and chorus: to make the sweep effect sound more polished.

I then used a tiny bit of resonance and automated the frequency control. The resonance makes the sweep whistle slightly as the frequency is automated, which makes it sound more natural, and natural-sounding elements are really useful for making a track feel like it was recorded live rather than in the box. The resonance really changed how the sweep effect sounded compared to without it. I then automated the frequency, because the effect wouldn't sound like a sweep without the automation - it would just play one frequency constantly. By automating the frequency, the effect swept through all the frequencies from low to high and sounded like a proper sweep. The image below shows what I did to make the sweep effect.
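Putting those pieces together: white noise through a low-pass filter whose cutoff is "automated" from low to high is the core of the sweep. A minimal Python sketch (the cutoff range and curve are illustrative, and the one-pole filter has no resonance, unlike the ES P):

```python
import math
import random

def noise_sweep(n_samples, start_hz=100.0, end_hz=12000.0, sample_rate=44100):
    """White noise through a low-pass whose cutoff ramps from low to
    high across the sweep -- the same idea as automating the ES P
    frequency knob."""
    out, y = [], 0.0
    for n in range(n_samples):
        # Exponential cutoff ramp: sounds more even than a linear one
        cutoff = start_hz * (end_hz / start_hz) ** (n / n_samples)
        alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
        x = random.uniform(-1.0, 1.0)   # white noise source
        y += alpha * (x - y)            # one-pole low-pass
        out.append(y)
    return out
```

Early on, only the low frequencies of the noise get through; by the end, nearly all of them do, which is the rising "whoosh".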


Xylophone - I used the Sculpture synth to make the xylophone because Sculpture has a material function for manipulating the sound you are synthesising. The materials are called nylon, wood, glass and steel. This function is really useful because combining more than one material layers the sound and helps develop a better tone. The pickup function on the left-hand side of Sculpture is another useful control: it sets how much pickup the sound should have. A lot of pickup would make the sound too resonant and the notes would clash with each other, so a little pickup is useful for keeping the notes from clashing while still giving the sound an audio tail.

I used wood only to make the xylophone, because wood is a really good material for getting that woody, plucky sound. Another reason is that wood isn't a harsh sound - it lets the xylophone sound gentle and subtle. I didn't combine wood with another material, because wood on its own was all I needed, and it is a great material for making the xylophone sound real.

I then used the pickup function to control how much pickup the xylophone should have. I didn't want too much, because too much pickup makes the xylophone sound harsh and resonant, drowns out its original sound and causes notes to clash. So I adjusted the pickup so that the xylophone doesn't sound dry and has an audio tail, which lets its sound punch through in the mix without drowning anything out.

The last thing I did was turn the delay off, because I didn't want the xylophone to have an effect on it when played. The delay makes the xylophone bounce on and off, which doesn't sound genuine, and I wanted the sound to be simple but real, without any effects or processing. The xylophone was going to be used to layer the multi-sample instrument, because its sound fits in well with it - the xylophone has a deep, bright tone and sounds plucky too. The image below shows what I did to make the xylophone.


Pluck - I used the Sculpture synth to make the pluck because I like the materials in Sculpture and I can manipulate how the pluck sounds using the pickup function. I used fifty percent steel and fifty percent nylon, because these materials give the pluck a bright, shiny, plucky sound. I didn't want the pluck to sound dull and flat, because then it wouldn't punch through in the mix. A bright sound helped the pluck cut through the mix and be heard clearly.

The next thing I did was adjust the pickup for the pluck. I didn't use too much, because too much pickup would make the pluck too resonant and cause notes to clash with each other. It would also drown out the pluck's sound and some of the other instruments in the track, and the pluck wouldn't punch through clearly in the mix. So I used very little pickup, which made the pluck less resonant and stopped it drowning out itself or the other instruments.

The last thing I did was turn the delay off, because I didn't want the pluck bouncing around - I wanted it to sound simple, without any processing. I also wanted to hear a tiny bit of pluck as I hit a key, because without that plucky sound it wouldn't sound like a pluck. This pluck is going to be used as an extra melody in the chorus to give the chorus texture. The image below shows what I did to make the pluck.


Re-Sampling - I re-sampled the bass only, because the bass didn't have rumble and didn't sound fat. The bass already had some processing on it: an EQ on the main channel the bass is in. I soloed the bass, put a cycle around it, bounced it as a WAV and dragged it onto a new audio channel. From there I processed it some more. The first thing I used was the overdrive distortion, because it lets me choose which frequency to distort and adjust how much distortion I want. I put 6 dB of distortion at 410 Hz, because this area was very weak and was the reason the bass wasn't fat and rumbly. The last thing I did was use the amp plugin, which gave the bass some warmth and worked well with the distortion, making the sub of the bass stronger and fatter. I used the Blues Blaster preset on the amp, and this preset improved the bass a lot.
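To put that 6 dB figure in perspective, dB boosts convert to linear amplitude factors, so 6 dB is roughly a doubling of level in the boosted band. A quick sketch of the conversion:

```python
def db_to_gain(db):
    """Convert a dB change to a linear amplitude factor:
    gain = 10 ** (dB / 20). 6 dB is about x2, -6 dB about x0.5."""
    return 10.0 ** (db / 20.0)
```

That is why 6 dB of drive at 410 Hz made such an audible difference to the weak part of the bass.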

After processing the bass I soloed it, put a cycle around it and bounced it as a WAV. From there I dragged the re-sampled bass onto a new audio channel and compared the waveforms: there was a big difference between what the bass looked like before and after. Re-sampling was the best option for the bass, because re-sampling a sound, instrument or melody that has already been processed can give you a much more improved and developed result. These images below show what I did to re-sample the bass.