Wednesday 17 December 2014

History of Electronic Music

Bibliography

http://juicy-m.com/about/

https://www.facebook.com/djjuicym/info?tab=page_info

https://soundcloud.com/dj-juicy-m

https://www.youtube.com/watch?feature=player_embedded&v=ZLu5b4Arl7U

Introduction

In this essay I have chosen to focus on Juicy M. She is a DJ and producer born in Kiev, Ukraine; her birth name is Marta Martinez. Another reason why I have chosen her is the skills and techniques she uses when she is behind the decks. The way she mixes on the decks shows how experienced and talented she is. The genre of music she specialises in is mostly electronic.

Biography

Juicy M began her DJ career in 2007, when she would DJ at popular nightclubs in Ukraine such as Orangerea Supper Club, Patipa and Matrix. At that time she played Hip Hop and RnB. After a few years she participated in the DMC World Eliminations in Ukraine and the World Technical DJ Championship in Dubai, and was named 'Best Female DJ of the Year' at the ProDJ Awards in Ukraine in 2010. From 2011-2012 she played international gigs as a Hip Hop and RnB DJ, and in that time she opened for many great artists such as the Black Eyed Peas, Jay Sean, Afrojack and many more. As of 2014 she is the owner of JUMMP Records, and she is well known for mixing without headphones.

Techniques



This video is a good example of the techniques Juicy M uses while she is doing her set(s). In this video she is using four Pioneer CDJs and a mixer. Another good thing about this video is that she is really getting into the music, and this is what a DJ has to do, because the DJ is there to entertain the audience. The equipment she is using is digital. At 1:12 I really like the way she back-spun one of the tracks she was playing on deck A. What I like about this is that, as the drop of the next track is about to come in, she adds more tension before the beat drops on deck B. At 1:30 I notice she is playing the same record on deck B, and what she is doing is using the faders to play different sections of the record on both turntables, and I really like the way she is keeping in time with the beat.

When DJ'ing first came onto the scene a long time ago, the turntables were analogue. This made DJ'ing a bit more difficult because DJs found it hard to find their cue points on the record, and they had to adjust the pitch control until the tempo of the record on the first turntable was synced up with the record playing on the second turntable. DJs also used vinyl records at that time because digital software and hardware weren't around. Another bad thing about vinyl records is that they would get damaged easily if the DJ scratched the record too much. Finally, the software we have today didn't exist when DJ'ing came into the music scene, and this made DJ'ing even more difficult because the DJ found it hard to keep track of where he/she was during the mix.

DJs would place a tiny bit of sticky tape on the vinyl record to remember where the cue point(s) on the record are, and it was also important for the DJ to remember the tempo of the records he/she was playing or about to play, or otherwise the DJ would make more than a few errors while on the decks.

Discography

Juicy M's remix of 'Watch Out For This' by Major Lazer gained over 7 million views and was supported by DJs such as Calvin Harris, Laidback Luke, Bob Sinclar and others. She hasn't released many records compared to other DJs and producers, and she also hasn't released her own E.P or debut album; she is mostly touring around the world rather than working in the studio. You can listen to her music and setlists directly from SoundCloud or YouTube, and you can watch her live performances or her short mixing sessions.

Tuesday 9 December 2014

Project Plan

Introduction

As a producer the main things I like doing are producing, DJ'ing, composing, recording and a bit of sound engineering. One of my goals is to produce for as many artists as I can, whether they are signed or unsigned. Another goal is to become a songwriter, because I'm finding it difficult to write lyrics and it is something I want to improve on. I like producing many genres of music; I don't like producing just one genre because it gets boring after a while and I like to experiment with other genres too. If I were in the industry my roles would mainly be producing, DJ'ing and composing, and I would see myself touring and producing for artists that are famous already.

Content

My product is going to be a 4 - 7 track E.P, and the genres will mostly be RnB, ambient and soul. I have chosen this style of music because it is relaxing, gentle and subtle. I was thinking about making a house E.P, but this idea was better for me. The people in my college that are studying music like this kind of music as much as I do, and it will be a great opportunity for me to collaborate with them.

Collaboration/Studio Roles

I plan on working with the students in 1st and 2nd year performance because the performance classes have great singers who will provide amazing vocals for my E.P. Before I start talking to the performance students about collaborating I will need to have a demo track for whoever I want to work with. This will help the student I am working with get an idea of what the instrumental track sounds like and come up with ideas for lyrics. Also, because I want to make a track using 2 - 3 live instruments, I can talk to some of the instrumentalists in the performance class, explain what my final music product is going to be, and show how they can help a lot with it.

Production Teams

From Wednesday 19th November to 3rd December, Reggie, Boris and I were recording Daniel James' track titled 'Home'; I recorded the last two sessions by myself. In these sessions we recorded Daniel using comping. Comping is recording multiple takes with cycle mode on and then compiling the best parts, and this technique was great for finding the best take for the track. In the first session all three of us managed to record the verses, in the second session I managed to record one chorus and copy it into the other chorus section, and finally in the last session I managed to record the last section of the song, which was the bridge. During these sessions it was important for us to keep track of what we were doing. Once a vocal take from a comp track was bounced or selected we needed to name it and give it a colour so that every section of the song was obvious to us. Another important thing was to save the file once a new recording had been taken, or otherwise this would have given us problems. Now that the track has been fully recorded it is time to get it professionally mixed and mastered.

Equipment List

The equipment I will be using for my final product is:

FL Studio
Logic 9
Logic X
NT1000 Mic
AKG 414 Mic
D.I Box
I/O Box
Mixing Desk
Headphone Mixer
Headphones
XLR, line lead and jack to jack cables
Software synths, mixing and effects plugins

I will need FL Studio to compose and produce the tracks for my E.P because I have installed some great plugins which I can use for chords, melodies, drums, basslines and more. Plus, I can structure the track neatly and keep everything tidy. However, I will use Logic 9 to make some of my tracks because Logic 9 has a library of sounds, drums and software plugins to choose from. It will also be easier for me to record what I want because I will be using a MIDI keyboard; I don't have a MIDI keyboard at my house, and this is why I find it difficult to record using the typing keyboard on my laptop. Plus, I will be using Logic X to import my track stems ready for mixing and mastering because Logic X has great mixing and effects plugins. I will be using the recording studio to mix and master my tracks because the recording studio is a great environment for finalising any type of music. The recording studio allows me to hear every frequency clearly and it will help me focus more when I am listening carefully to my mixdowns.

When I record instruments or vocals for my E.P I will be using the NT1000 and AKG 414 microphones, D.I. box, XLR, line lead and jack to jack cables, mixing desk and I/O box. I will use the NT1000 and AKG 414 microphones for vocals because they provide clear vocals, don't require much EQ'ing and give enough brightness to the vocals. I will use the microphone stand to hold the microphone I am using; from there I will use an XLR cable to plug the microphone into the I/O box, and then use the mixing desk to turn the phantom power on and adjust the level(s). I will also use a line lead cable to plug the headphone mixer into the I/O box. After that I will set the headphones up so that the singer/rapper I am recording can hear the instrumental track and me communicating with him/her. I will then use Logic X to import the finished track ready for recording, and I will ensure that the tempo is correct in the Logic file. I will also create new audio channels for the vocalist and ensure that the channels are routed to outputs 1 & 2, which is the stereo output.

For recording live instruments like guitar, bass and keyboard I will use the D.I. box(es), jack to jack, line lead and XLR cables to link up all the instruments and the headphone mixer to the I/O box. I will then open a new Logic file in Logic X, create new audio channels, make sure all the inputs match the inputs on the I/O box, and ensure that they are coming out of outputs 1 & 2, which is the stereo output. From there I will use the mixing desk to turn the phantom power on for the D.I. box(es) and adjust the levels for the instruments I am using.

Future Plans

For the future I will send 3 - 4 of my E.P tracks to DJs that are looking for new music from up-and-coming artists. This will help me get recognition from people that work in the A&R departments of record labels. Creating an official website is another good way of getting recognised, because people that work in A&R can look at my biography and listen to music I have produced now and before. Social network sites like Facebook and Twitter will help me make contacts with other unsigned artists and producers, and they will probably help me get in touch with people that work in events management and A&R. Getting in touch with events management people will help me showcase music that I have produced; it will attract a lot of people's attention and this will help my fan base grow. Plus, it will hopefully attract an A&R employee that is looking for new artists to sign.

Friday 5 December 2014

Beginners Guide to Synthesis

Introduction

A synth is a software or hardware instrument that generates sounds electronically; software synths usually also come with preset sounds and drums. A software synth contains other controls such as waveforms, oscillators, LFOs, HP, LP and BP filters, an ADSR envelope, etc. These controls are used for sound synthesis: creating sounds from scratch and making great effects with them.

Sound synthesis is a great way for a producer to create the sound that he/she wants if it can't be found in the preset library. Synthesising sounds is also a great way to make music sound much more professional and personal. Not every song needs preset sounds, because synthesised sounds work well in tracks of any genre; they are especially great for EDM, Deep House, DnB and Dubstep.

Acoustic Theory

When an object is hit with another object it creates a sound. Once the sound has been created it travels through the air as vibrations of air particles, and from there it is picked up by the pinna and travels down the ear canal. Finally the sound hits the ear drum, which causes the little hairs in the cochlea to react; the hairs vibrate because sound is being picked up. However, if too much sound is being picked up, this can have a serious impact on the three little ear bones and the cochlea. For example, if someone listens to music through headphones at full volume, that person's hearing will not be as good as it was before: everything they hear will sound dull and faint. This is because the three little ear bones are getting to the point where they will be damaged, and once they are damaged it is permanent and leads to loss of hearing. Tinnitus occurs when the cochlea is permanently damaged. Tinnitus is an annoying ringing sound that you mostly hear in quiet environments, and the only way someone will not hear their tinnitus is if they are in a loud environment, depending on how bad their tinnitus is.

Looking after your ears is very important, especially for a producer, sound engineer or technician. The best ways you can look after your ears are by keeping the volume low when listening to music on headphones, wearing good quality ear plugs at live events, clubs and parties, and giving your ears a break from time to time so they have time to recover. Doing all of these things will help you prevent hearing loss, and this will benefit anyone that is a producer, sound engineer or technician.

Frequency

The next thing I will talk about is frequency. Frequencies are measured in hertz (Hz) and show us where each sound and instrument lives in a record. Every record is made up of lots of different frequencies, and the human hearing range is roughly 20 Hz to 20,000 Hz (20 kHz). Frequencies of 200 Hz or below are where all the deep sounds and instruments live, such as kick drums, bass and sub bass. From 200 to 400 Hz is where you will hear more of the body of the low instruments. From 400 Hz to 1 kHz is where the low-mid sounds live; instruments such as guitars and vocals will be in this frequency range. From 1 to 3 kHz you will hear more of the presence of the instruments, plus a little bit of the hats and crash cymbals. From 4 to 8 kHz is where the hi-mid content lives, mostly the body of the hats, crash cymbals and ride cymbals; you will also be able to hear the top end of most of the instruments. From 8 to 10 kHz is where the high-end sounds such as ride cymbals, hats, crash cymbals, bells and the top end of the vocals live. Finally, 10 kHz and above is where all the air and hissing lives.

This can be very useful for a mixing and mastering engineer because, when the engineer listens carefully to the track being mixed or mastered, he or she will know exactly what to work on. The MultiMeter plugin in Logic is another good way of looking at different frequencies. It will be very helpful to a sound engineer because it shows which frequencies need to be cut, shelved or boosted in any instrument or sound that needs it.
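
To show the same idea outside of Logic, here is a minimal Python sketch using NumPy (the test signal and its frequencies are just made-up example values). It calculates a spectrum, which is roughly the information an analyser like the MultiMeter displays:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr   # one second of sample times

# A test signal: a 100 Hz "bass" plus a much quieter 5 kHz "hiss"
sig = np.sin(2 * np.pi * 100 * t) + 0.2 * np.sin(2 * np.pi * 5000 * t)

spectrum = np.abs(np.fft.rfft(sig))            # how strong each frequency is
freqs = np.fft.rfftfreq(len(sig), d=1 / sr)    # the frequency (in Hz) of each bin

# Print the two strongest frequencies - the same information an analyser shows visually
for i in spectrum.argsort()[-2:][::-1]:
    print(f"{freqs[i]:.0f} Hz, magnitude {spectrum[i]:.0f}")
```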

Fundamentals of synthesis

Waveforms and ADSR

When it comes to creating sounds there are a few things that need to be learnt before doing the actual creation. For example, a producer needs to know the waveforms that are used in synthesis: sine, square, triangle and sawtooth. A sine wave is great for making bass sounds; however, a sine wave doesn't have any harmonics. Sine-wave sub basses usually sit around the 44 to 86 Hz zone. If a producer wanted some harmonics in the bass, he could add a sawtooth, triangle or square wave depending on how he wants the bass to sound. Logic Pro 9 has a synth called ESP1 which is great for creating deep house basses.
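
As a rough sketch of what these four waveforms actually are (using Python with NumPy and SciPy, nothing to do with Logic itself; the note frequency is just an example value):

```python
import numpy as np
from scipy import signal

sr = 44100
t = np.arange(sr) / sr   # one second of sample times
f = 55.0                 # an example bass note, roughly in the zone mentioned above

sine = np.sin(2 * np.pi * f * t)                    # no harmonics: pure sub bass
square = signal.square(2 * np.pi * f * t)           # odd harmonics: hollow, buzzy
saw = signal.sawtooth(2 * np.pi * f * t)            # all harmonics: bright and rich
triangle = signal.sawtooth(2 * np.pi * f * t, 0.5)  # odd harmonics, much softer than a square
```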

Another thing you will find on a software synth is the ADSR envelope. ADSR stands for attack, decay, sustain and release. The attack is the time taken for the sound to reach its peak, the decay is the time taken to run down from the attack level to the sustain level, the sustain is the level the sound holds at until the key is released, and finally the release is the time taken for the sustain level to reach zero after the key is released.
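
Here is a small sketch of that idea in Python with NumPy (the timings and levels are just example values, not settings from any particular synth):

```python
import numpy as np

def adsr(attack, decay, sustain, release, note_length, sr=44100):
    """Attack, decay and release are times in seconds; sustain is a level (0-1);
    note_length is how long the key is held down, in seconds."""
    a = np.linspace(0, 1, int(attack * sr))           # rise to the peak
    d = np.linspace(1, sustain, int(decay * sr))      # fall to the sustain level
    hold = max(0, int(note_length * sr) - len(a) - len(d))
    s = np.full(hold, sustain)                        # hold while the key is down
    r = np.linspace(sustain, 0, int(release * sr))    # fade to zero after release
    return np.concatenate([a, d, s, r])

sr = 44100
# A sharp attack, short decay, long sustain and a medium release, like the pad described below
env = adsr(attack=0.01, decay=0.1, sustain=0.8, release=0.6, note_length=2.0, sr=sr)
t = np.arange(len(env)) / sr
pad = np.sin(2 * np.pi * 220 * t) * env   # a simple 220 Hz tone shaped by the envelope
```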

If I were to design a pad sound using this envelope, I would select the waveforms I want to use to create the pad and from there I would use the ADSR envelope to shape the sound. I would put a sharp attack because pads with a sharp attack can be heard more easily and you won't have to wait for the pad to reach your ears.

I would then use a short decay because long decays are not really my preference when it comes to making pads, as the pad will take longer to reach its sustain level. I like my decays to be nice and short because this allows my pad to reach its sustain level much quicker; it's better than waiting for a long period of time.

From there I would use a long sustain because this allows the pad to keep sounding for as long as the key is held. Most pads usually have a long sustain, and a pad with a short sustain would not sound like a pad at all; it would sound like a stab or a pluck. This is the reason why I like having long sustains on my pads: pads with long sustains give a lot of ambience and harmonics, and I can hold the pad for an unlimited amount of time without letting go of the key.

Finally, I would use a release that is not too long and not too short. What I mean by that is I would play around with the release controller until I found a release that is just right. I would first create a loop in Logic with the created pad and from there I would use the release controller to adjust how much release I want. Once I found a release I am satisfied with, I can leave the ADSR envelope as it is to show that I have synthesised a pad and used the ADSR envelope to shape it.

Filters

Filters are another feature you will find on a software synth. There are three main types of filter: lo pass, hi pass and band pass. The hi pass filter only allows the high frequencies of a sound to pass, the lo pass filter only allows the low frequencies to pass, and the band pass filter only allows the frequencies in a band between a low and a high cutoff to pass.
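
As a rough illustration of the three filter types (a Python sketch using SciPy; the cutoff frequencies and filter order are just example values, not how any particular synth's filter is built):

```python
import numpy as np
from scipy import signal

sr = 44100
noise = np.random.randn(sr)   # one second of white noise: it contains every frequency

# Lo pass: keep everything below 500 Hz (bass and warmth)
b, a = signal.butter(4, 500, btype="lowpass", fs=sr)
low_passed = signal.lfilter(b, a, noise)

# Hi pass: keep everything above 2 kHz (air, hiss, hats)
b, a = signal.butter(4, 2000, btype="highpass", fs=sr)
high_passed = signal.lfilter(b, a, noise)

# Band pass: keep only the band between 500 Hz and 2 kHz (the mids)
b, a = signal.butter(4, [500, 2000], btype="bandpass", fs=sr)
band_passed = signal.lfilter(b, a, noise)
```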

A lo pass filter is great for creating white noise sweeps in EDM music. I can easily create a white noise sound with the ES2 synth by selecting the noise waveform on all three oscillators; from there I can record 4 - 8 bars of white noise, and finally I can select the lo pass filter and automate it to create a perfect white noise sweep. A lo pass filter is also great for ambient pads. When a pad has a lo pass filter, it takes out the higher harmonics in the pad and gives a lot of depth and warmth. This works well for EDM, RnB and ambient music.

A hi pass filter is great for snare drum fills. If I had recorded an 8 bar snare drum fill in an EDM track, I would add and automate a hi pass filter. I would do this because it signals that the drop is about to come in; without the hi pass filter there isn't much tension building for what is about to happen next. The best place to automate the hi pass filter is around the last two bars or the last bar of the snare drum fill, because this is where the beat is about to drop.

Another thing I would use a hi pass filter for is bringing out the harmonics in a bass. If I was using a bass that didn't have much going on above the fundamental, I could use the hi pass filter to cut some of the low end so the upper harmonics stand out more; a bass doesn't always have to sound deep. When listening to a bass in EDM music you can tell that a lot of harmonics are used, and it sounds so much better than a bass without harmonics. A bass with very little or no harmonics is also good, though: basses like these can be used in house, hip hop, RnB and pop as 808s, sub bass or deep bass.

LFOs

LFO stands for low frequency oscillator. LFOs are usually used for dubstep and DnB basses and rising sound effects. An LFO oscillates at a very low rate (usually too low to hear as a pitch on its own) and is used to modulate another parameter, such as volume, pitch or filter cutoff, which is what creates wobble and vibrato effects. LFOs are a great thing to use in music, especially when making sound effects. A producer can synthesise a sound and use an LFO to create a wobble rather than relying on one-shot samples.
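
A minimal sketch of a volume wobble in Python with NumPy (the bass note and the 4 Hz LFO rate are just example values):

```python
import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr                   # two seconds

bass = np.sin(2 * np.pi * 110 * t)           # a plain 110 Hz bass tone

# A 4 Hz LFO: far too slow to hear as a note on its own,
# but perfect for modulating the volume of the bass
lfo = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))  # swings smoothly between 0 and 1

wobble_bass = bass * lfo                     # a simple wobble effect
```

Using the same LFO to move a filter cutoff instead of the volume gives the classic dubstep-style wobble bass.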



This piece of music is a good example of what an LFO does and what it sounds like. At 1:19 of the song 'Cinema' I can hear an LFO being triggered. The LFO is making the bass and some other sounds wobble, and you can hear different dynamics in the sounds used. This can be a very useful technique for sound synthesis because it gives a producer a clear idea of what a created sound could sound like, and it will help with new tracks that any producer makes in the future.

Oscillators

Oscillators are the parts of a software or hardware synth that generate the raw waveforms. Using more than one oscillator allows the synthesised sound to become fatter and more layered. The ES2 synth in Logic has three oscillators; each oscillator has its own waveform selection, octave and tune controls, and there is also a triangular mixer and a glide control. Synthesising a sound using more than one oscillator is very useful because one oscillator alone often doesn't make a created synth sound very interesting; using more than one allows a synth to have more harmonics and a little bit of low end. The triangular mixer allows you to control how much of each oscillator you want. Creating synths with more than one oscillator is often better than using preset sounds, because if you can't find the sound you are looking for in the plugin's library you can use sound synthesis to create it yourself. When creating the sound you want, it is important to use your ears and listen closely to what you are trying to create.
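
Here is a rough sketch of the layering idea in Python with NumPy and SciPy (the detune amount and mix weights are just example values, not ES2 settings):

```python
import numpy as np
from scipy import signal

sr = 44100
t = np.arange(sr) / sr
f = 110.0   # example note

# Three "oscillators": the same sawtooth wave, one slightly detuned and one an octave down
osc1 = signal.sawtooth(2 * np.pi * f * t)
osc2 = signal.sawtooth(2 * np.pi * (f * 1.005) * t)   # detuned slightly for width
osc3 = signal.sawtooth(2 * np.pi * (f / 2) * t)       # an octave lower for extra low end

# A simple "mixer": the weights control how much of each oscillator you hear,
# a bit like a triangular oscillator mixer on a synth
layered = 0.4 * osc1 + 0.4 * osc2 + 0.2 * osc3
```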


Types of synthesis

Additive synthesis - This type of synthesis allows someone to create a sound made out of sine waves of different frequencies. A sine wave has no harmonics; it's just one tone. If I wanted to make a sound using additive synthesis, I would use the EXS24 from Logic, go into the edit window and add 5 - 10 sine waves. During this procedure I must ensure that the note of every sine wave I insert is different: if 10 sine waves are all playing the same key there won't be any extra harmonics, it will just be one frequency. By changing the note on every sine wave you use, more harmonics can be heard and seen on the MultiMeter.
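
A small sketch of additive synthesis in Python with NumPy (the number of sine waves, their frequencies and their levels are just example values):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
fundamental = 110.0   # the lowest sine wave

# Add five sine waves, each at a different frequency (here simple multiples of the
# fundamental), each one a little quieter than the last
tone = np.zeros_like(t)
for n in range(1, 6):
    tone += (1 / n) * np.sin(2 * np.pi * fundamental * n * t)

tone /= np.max(np.abs(tone))   # normalise so the summed waves don't clip
```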

Subtractive synthesis - This type of synthesis allows someone to cut off harmonics and any unwanted parts of a sound. For instance, if I wanted to make a pad I would use the ES2 synth in Logic, select the waveforms I wanted and configure the levels on the ADSR envelope. I would not use an LFO because I don't use LFOs when I make my pads. From there I would open the MultiMeter and take a look at the harmonics of the pad. If I saw some unwanted high end I would use a lo pass filter to cut off the harmonics in the high frequency range. This will make the pad sound deeper and more mellow, without the high end piercing my ears.

FM synthesis - FM stands for frequency modulation. FM synthesis involves two oscillators, called the carrier and the modulator. The carrier is the sound you actually hear, and the modulator changes the carrier's frequency very quickly, which creates new harmonics and can make the sound aggressive or wobbly. When I was experimenting with the EFM1 plugin I used the harmonic control on the carrier oscillator to create the sound I wanted, and from there I used the harmonic and wave controls on the modulator to make the sound more aggressive. After that, I used automation on the LFO rate to create a wobble vibrato, which turned my created sound into a downlifting sound effect. After experimenting with the EFM1 plugin I understand how effective and useful it can be. It isn't only the EFM1 that can be used for FM synthesis: the ES2 plugin and the DX7 keyboard are also great for this type of synthesis.
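
To show the basic maths behind it (this is only a rough Python sketch, not how the EFM1 works internally; the carrier, modulator and modulation index values are made up):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

carrier_freq = 220.0   # the pitch you actually hear
mod_freq = 110.0       # the modulator's frequency
mod_index = 5.0        # how hard the modulator pushes the carrier (more = more harmonics)

# The modulator wobbles the carrier's phase very quickly, creating new sidebands/harmonics
fm_tone = np.sin(2 * np.pi * carrier_freq * t
                 + mod_index * np.sin(2 * np.pi * mod_freq * t))
```

Raising the modulation index adds more and more harmonics, which is why FM sounds get more aggressive as you push the modulator.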

Bibliography

http://abovegroundmagazine.com/columns/pro-logic/10/26/understanding-sound-frequency-a-guide-to-hz-and-khz/

http://wkcphilrobinson.blogspot.co.uk/search/label/Beginners%20Guide%20To%20Synthesis

https://www.youtube.com/watch?v=k6lVhGeyXuw

Monday 1 December 2014

Defining Events

The event I will be writing about is the end of year performance that took place at Westminster Kingsway College. This was a small event where parents and friends of the music students could watch them perform live at the end of their first year. The show lasted for over an hour and a half and there were lots of great acts in that time; this is an event that takes place at the college every year. The event also had a small bar where people could buy alcohol or juice.

The key things that made this event a success were:
  • The music didn't have explicit content
  • All the equipment was working properly
  • The sound engineers and technicians knew what they were doing
  • There was good lighting and smoke machines
  • The musicians performed fantastically to the audience
  • The DJ played different genres of music the people liked
Before this event happened, it needed to meet some important targets. These targets were:
  • Promotion
  • Venue
  • Costing
  • Equipment
  • Performers
  • Tickets
  • Performance rehearsals
  • Posters
  • Banners
  • Sound engineers & technicians
  • Staging equipment
  • Lighting
  • Food & drink
Without these important factors the event would not have happened. These factors are essential for live gigs because a gig is a live show where musicians go on stage and entertain the audience, and the audience will expect a great performance from each act. Promotion plays a big role: it is the main factor that attracts people's attention to the gig that is taking place. Social network sites such as Facebook and Twitter are very useful for promoting an event, and putting up posters is another way of promoting it. The more promotion an event gets, the more people are likely to show up at the event.

Without promoting an event, the public won't know about these new and upcoming bands and solo musicians. Promotion helps the artist(s) most of all because it will not only attract other people's attention but also help them get recognised by A&R consultants. A lot of people attended the end of year performance, and this happened because the promotion of the event was spread both inside and outside the college, and the music students that were performing told their parents and friends to come along and watch.






These two videos show two songs performed by L2 music. By watching these two videos you can instantly tell that this music class rehearsed really hard. It took them a few months to choose or create the songs they were going to rehearse and perform for the college. It was crucial that the L2 music, L3 music performance and production 1st year and 2nd year classes rehearsed for months before the actual show, so that the students were well prepared and organised for what they needed to do to make the show a brilliant success.

The sound engineers & technicians also played another key role in the end of year performance. They are responsible for setting up the equipment and making sure the gain levels for all the equipment are set correctly. The sound engineers & technicians working before, during and after the gig must always pay attention to what they are doing. This means no one else should distract them while they are concentrating, because if they lose focus it can become a serious problem in a live gig.

The sound engineers & technicians that were supervising for the end of year performance did a brilliant job at making the event run smoothly and sound perfect. They really understood what they needed to do and what needed tweaking.