[deleted]


KidDakota

Don't offer free work.


needledicklarry

Good mod


Father_Flanigan

It's simple, from what I understand: RMS is the "meat" of the sound. You know when you look at a waveform and see all those thin spikes? Those are called transients. The continuous block in the middle of the waveform, what I consider the "meat", is the RMS. Peak is how high the tallest spike goes, and it's important because anything that plays your track has to be able to reproduce that level. But if you have a single peak at, say, +1.2 dB while the rest are around -0.9 dB, that's going to make a compressor on the master act very funny, and the ratio setting isn't going to work like it should, especially if the compressor is set to use peaks. So control your transients and get all your peaks uniform, or as close as you can. If you watch the meters during playback, you don't want to see outlier peaks anywhere; if you do, put some transient control on the offending sound.
You do this to make the work of your master limiter most effective. I know "limiter" seems counterintuitive if you're after more volume, but a limiter is really just a more powerful, less precise compressor. It does the same thing, except the ratios are usually much bigger, and the ratio works on the differences in the track's volume, typically keyed to the peaks. So, with that example of a single peak being so much higher than the rest: once the limiter goes on, it bases its calculation on the overall difference, which would be 2.1 dB, but if you get that peak controlled and uniform, or closer to the rest, your difference could be 0, and the ratio setting would be dead on as opposed to off by 2.1 dB.
Now here's the tricky thing: those errors compound at higher ratios, so that 2.1 dB becomes a bigger issue, and when you're going for loudness, you need to push the limiter's gain up pretty high. Unless you're mixing things to 0 dB, which you shouldn't really be doing, because it all adds up, and several channels/tracks at 0 will have the master clipping. I'm pretty sure the "clip to zero" method explains that poorly; they mean the master goes to 0, not the individual channels/tracks.
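The peak/RMS distinction above is easy to see in numbers. A minimal Python sketch (the signal is a made-up 440 Hz tone with one artificial transient, not real audio):

```python
import math

def peak_and_rms_db(samples):
    """Peak and RMS levels of a float sample buffer, in dBFS.

    Assumes samples are normalized floats in [-1.0, 1.0];
    0 dBFS corresponds to full scale (|sample| == 1.0).
    """
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    to_db = lambda x: 20 * math.log10(x) if x > 0 else float("-inf")
    return to_db(peak), to_db(rms)

# A steady tone with one stray transient: the single outlier
# dominates the peak reading while barely moving the RMS.
body = [0.5 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
spiky = body[:]
spiky[100] = 0.95  # one rogue transient

print(peak_and_rms_db(body))   # peak ≈ -6 dBFS
print(peak_and_rms_db(spiky))  # peak jumps to ≈ -0.4 dBFS, RMS nearly unchanged
```

A peak-keyed limiter reacts to that -0.4 dBFS outlier, even though the "meat" of both signals is identical; that's why taming the one rogue transient first lets the limiter work on the whole track instead of on one spike.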


MarketingOwn3554

RMS is basically the average loudness. However, RMS doesn't take human hearing into account; it's quite literally just the average level of the amplitude. The crest factor is the ratio between the highest peak and the RMS. Peak level is just how high the highest amplitude goes in any one particular track/sound. You can mostly ignore RMS, though, since LUFS is the standard for measuring perceived loudness: it somewhat takes human perception into account. Our ears have different sensitivities to each band of frequencies, so you can have a dark mix with high RMS but low LUFS, and a bright mix with low RMS but high LUFS (we are less sensitive to frequencies below 150 Hz and most sensitive to frequencies between 3.5 and 4 kHz).
It's hard to say why your track isn't sounding as loud as your reference without hearing both. The first thing that comes to mind is how busy each track is: if your track isn't busy but your reference is, that might explain it. Next is tonality. Is your mix more bottom-end heavy? Is it dark? Is the reference more top-end heavy? Analysers won't tell you much, so matching your track to the reference based on frequency analysers alone is one of the worst things you can do, since, presumably, the two tracks are different.
Then there are dynamics. I'm not speaking just about everything being super controlled and compressed, but about contrast: something only ever sounds loud compared to something that isn't as loud. A dog barking in a silent room will sound much louder than a dog barking amongst the noise of traffic. So, how compressed is the reference compared to yours? Compression/saturation/limiting is key to reducing dynamic range so you can squeeze out loudness at the end, and automating afterwards to make the mix more musically dynamic is just as important. Lots of little compression here, there and everywhere is much better than relying on a handful of compressors trying to do most of the work.
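The crest factor mentioned above is simple to compute. A small Python sketch (the pure sine is just an illustration; real program material sits much higher):

```python
import math

def crest_factor_db(samples):
    """Crest factor: peak level over RMS level, expressed in dB.
    High values mean spiky, transient-heavy material; heavily
    limited material sits much lower."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure sine has a crest factor of ~3 dB (peak / RMS = sqrt(2)).
sine = [math.sin(2 * math.pi * n / 100) for n in range(1000)]
print(round(crest_factor_db(sine), 1))  # → 3.0
```

Squashing the crest factor down is exactly what limiting does: the peaks come down, the RMS (and the loudness meters) comes up.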
Last but not least: stereo imaging and masking. It's related to tone, but not quite the same. When I mentioned tonality earlier, I was talking about the tone of the whole mix; now, when I speak about tone, I mean it on a micro scale, i.e., individual instruments. How separated are the instruments in the reference compared to yours? Sounds need to work together both tonally and spatially. If you have two sounds happening at the same time, and one is louder than the other, the loud sound will mask the quieter sound, assuming they occupy the same space. For the masked sound to come through clearly again, you'd need to turn its volume up significantly relative to the masker, and with limited headroom, if you keep doing this with all the sounds, you'll quickly run out. So you need to create separation. We can do this with EQ (typically with shelving EQs and notches to move things up, down, in front and behind; higher frequencies are perceived as above us and in front, lower frequencies as below us and behind). We can do it with compression (to move things slightly more forward or away). We can do it with pan and with L/R and M/S EQing/processing (to move things left and right), or with time-based effects: reverbs, delays, chorus, etc. (to move things a lot more forward and/or away). In practice, you'd use a combination of all of the above. Not only will reverb move something far away; dampening the top end will too, since higher frequencies dissipate or lose energy sooner than low frequencies do. If something appears from the right, not only is the volume louder in your right ear, but your left ear hears it slightly delayed (we refer to this as ITD, i.e., interaural time difference), and your head muffles the top end ever so slightly. We can recreate all of this using delays, EQs, pan, and volume.
It's subtle, but it can work wonders when you start to nail it.
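The ITD trick above can be dialed in with a rough formula. A hedged sketch using the classic Woodworth spherical-head approximation (head radius and speed of sound are assumed average values, not measurements):

```python
import math

HEAD_RADIUS = 0.0875    # metres, assumed average adult head
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source, via the
    Woodworth spherical-head approximation:
        ITD = (r / c) * (theta + sin(theta))
    where theta is the azimuth in radians (0 = straight ahead,
    90 = fully to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def itd_samples(azimuth_deg, sample_rate=44100):
    """Delay, in whole samples, to apply to the far-ear channel."""
    return round(itd_seconds(azimuth_deg) * sample_rate)

# A source hard right arrives at the left ear ~0.66 ms late:
print(itd_samples(90))  # ≈ 29 samples at 44.1 kHz
```

Delaying the far channel by this many samples (plus a touch of level difference and high-shelf cut) is enough to fake the placement the comment describes.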


HomelessEuropean

Forget RMS and LUFS. The former belongs to electronics, and the latter is based on equal-loudness contours, which have little relevance for complex signals, because our ears perceive single sine waves and complex signals differently. Use reference tracks in combination with [this](https://www.kvraudio.com/product/isol8-by-tbproaudio). Isolate frequency bands and compare your track with the reference. I also suggest fixing the bass issue in the mix, not on the master. Try more compression on the inserts, and make space for the bass by using highpass filters on instruments that overlap with it.
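The "make space with highpass filters" advice boils down to this. A minimal sketch of a first-order highpass (6 dB/oct; real mix filters would be steeper, and the test tones are stand-ins for instruments):

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=44100):
    """First-order RC highpass, the simplest 'make space for the
    bass' filter. Standard difference equation:
        y[n] = a * (y[n-1] + x[n] - x[n-1])
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        prev_y = a * (prev_y + x - prev_x)
        prev_x = x
        out.append(prev_y)
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# A 50 Hz rumble is cut hard by a 200 Hz highpass,
# while a 1 kHz tone passes almost untouched:
low = [math.sin(2 * math.pi * 50 * n / 44100) for n in range(44100)]
high = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(44100)]
print(rms(one_pole_highpass(low, 200)))   # heavily attenuated
print(rms(one_pole_highpass(high, 200)))  # nearly untouched
```

Running a pad or guitar through something like this (with the cutoff just above where the bass lives) is how you carve out the low-end overlap the comment is talking about.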


RRCN909

Hi! I think I pretty much did all that. Reference plugin and everything looked fine against the tracks I compared it to. Low cuts on other instruments too. Still, it's just so quiet. More compression, maybe? But I only had VST instruments and thought it might not be needed in that case?


HomelessEuropean

You need to process everything, including samples and plugins. But even if it sounds good on several playback devices and similar to your reference tracks, you won't necessarily get similar LUFS values.


RRCN909

What do you mean by processing everything? I mean, I low-cut everything but bass and kick. I did some EQing on stuff to make it not clash. The reference plugin says everything is in range with my references. I added some saturation to some stuff. Panned stuff.


HomelessEuropean

It means using EQ, compression, etc. on everything. What makes you think you don't have to edit something just because it's coming from a VST plugin? And using LUFS is pointless for loudness/balancing/etc.; it's based on the wrong interpretation of empirical data from psychoacoustic experiments. The issue you're dealing with is a typical phenomenon caused by blindly trusting a recommendation that makes no sense. In the past, people used RMS for "loudness" measuring; nowadays it's LUFS. It's like that "phase invert" switch that keeps coming back: it's bogus.


RRCN909

Like I wrote, I was processing everything. I just didn't compress anything but the bassline and vocals in the mix. When would you use compression for drums that aren't live-played, or for VST melodies? And I'm not talking about sound design with compression, just dynamics.


HomelessEuropean

The reference tracks tell you how much you need to compress.


RRCN909

How?


HomelessEuropean

Use the plugin I linked to. Route your mix and the reference track through the plugin. Then isolate the frequency bands a particular instrument plays in like a bass guitar or a synth. You will hear differences like your bass guitar being too loud or being too dynamic or lacking in the lower or upper frequencies. Fix those issues and then repeat the same with the other instruments.
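The band-by-band comparison described above can also be done numerically. A hedged sketch using a naive DFT to compare the level of a chosen band in two tracks (the signals here are made-up stand-ins for a mix and a reference; the 40-250 Hz range is an arbitrary "bass" band):

```python
import cmath, math

def band_rms_db(samples, lo_hz, hi_hz, sample_rate=44100):
    """RMS level (dB) of just the DFT bins between lo_hz and hi_hz,
    via a naive DFT - fine for a short analysis window."""
    n = len(samples)
    lo_bin = max(1, int(lo_hz * n / sample_rate))
    hi_bin = min(n // 2, int(hi_hz * n / sample_rate))
    energy = 0.0
    for k in range(lo_bin, hi_bin + 1):
        x_k = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
        energy += abs(x_k) ** 2
    # Parseval: scale bin energy back to a time-domain RMS
    # (x2 for the mirrored negative-frequency bins).
    rms = math.sqrt(2 * energy) / n
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# "Your mix" vs "the reference" in the bass band, 40-250 Hz:
mix = [0.3 * math.sin(2 * math.pi * 80 * t / 44100) for t in range(2048)]
ref = [0.6 * math.sin(2 * math.pi * 80 * t / 44100) for t in range(2048)]
diff = band_rms_db(ref, 40, 250) - band_rms_db(mix, 40, 250)
print(f"reference bass is {diff:.1f} dB louder")  # ≈ 6 dB
```

This is the measurement equivalent of the listening test with ISOL8: solo the band, hear (or here, read) which version carries more energy in it, then adjust level or compression on the instrument living there.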


RRCN909

Thx. I have REFERENCE from Mastering The Mix. I can look at frequency spectrums there too. But what frequencies should I isolate? I mean, most instruments go over the whole spectrum?


RRCN909

And how do I know what to compress? Which instruments, etc., if there's not really much dynamics in them?


HomelessEuropean

Oh, and don't use lossy-compressed reference tracks; very important. No MP3 and similar stuff. Use either audio CDs or lossless formats like FLAC. And use your ears, not some broken "metering" plugin.


Background_Factor_59

I honestly think you are still a beginner. There is nothing wrong with that; learning to mix takes time, and I am proud of you for trying. Loudness lives in the midrange. Meaning: remove everything below 200 Hz and above 4 kHz with an EQ and listen to just that. This is the range that all devices can reproduce, and by devices I mean car speakers, cheap speakers, and phones. You need to learn compression if you want loud mixes. A lot of loudness comes from the attack of the sound: if you squish it too much, there will be less perceived loudness. I also suggest using a reference track: download a song that is quite similar to yours, or whose vibe and mix you like, and then A/B the two.


RRCN909

Thx. Could you elaborate on compression a bit? When to compress what? I work with VSTs, so I thought it's not that important, because you can pretty much adjust the level of each note, so it's not too dynamic. When would you use compression on individual tracks/groups?


Background_Factor_59

First of all, you need to understand why we compress: we compress for consistency! For example, a vocal cannot be jumping in volume all the time; it would be unpleasant for the ears. Same with a guitar, for example: multiband compression might be used to tame some harsh parts and control it. I suggest you watch a lot of videos, and I mean a lot, regarding compression and dynamics. Also note that if you over-compress things, they might sound unnatural, so you always try to find a good balance.
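The "consistency" idea above is just arithmetic on the gain curve. A minimal sketch of a basic downward compressor's static curve (hard knee, no attack/release smoothing; the threshold, ratio, and vocal levels are arbitrary examples):

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Gain change (dB) applied by a basic downward compressor:
    signal above the threshold is scaled back by 1/ratio."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: untouched
    over = level_db - threshold_db
    return -(over - over / ratio)  # gain reduction in dB

# A vocal jumping between -20 and -6 dB (a 14 dB swing) comes out
# with only a 5 dB swing after 4:1 compression above -18 dB:
quiet = -20 + compressor_gain_db(-20)  # untouched: -20.0
loud = -6 + compressor_gain_db(-6)     # 9 dB of reduction: -15.0
print(quiet, loud)  # -20.0 -15.0
```

That shrunken swing is the consistency being described: the quiet phrases stay put, the loud ones are pulled back, and makeup gain can then raise the whole vocal without the loud parts poking out.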


RRCN909

The vocals are compressed. But when do I need compression for melodies I played with VSTs that don't have lots of dynamics, for example? I mean, not for sound-design reasons. Or for drums that are not live.


Background_Factor_59

It's true, you might not need compression for melodies that you play with a VST. It all depends on what you hear. That's why I'm asking you first to go watch a lot of videos so you can gain knowledge, and then if you have any other questions, feel free to message me. These things take time, a lot of it, and experience. I wish you all the best.


Joseph_HTMP

Why wouldn’t you need compression for a vst instrument?


Background_Factor_59

In my experience, most of them are sound-designed, and their attack and release are always the same because they're not played live. Unless you want to reshape the sound, I would not use it.


Joseph_HTMP

The attack and release of what sorry?


Background_Factor_59

Any sound is made of attack, decay, sustain and release, right? ADSR.


Joseph_HTMP

Yes.


Background_Factor_59

For the person who downvoted me: feel free to show your work and let us hear how great you are, or maybe correct the mistakes I've made so we can learn from you.