A hypothetical/experimental question I've always wanted to test.
|
DJ Robby Rox |
I know they did that Reason comparison where they made the same track once in Reason and once in a real studio, then compared the two, and they were almost identical.
This is not that, but it's similar. And I have no idea if that whole experiment was secretly funded by Propellerhead, but for some reason I just don't buy it. I'm not sure if they mentioned in that little experiment what the specs of the Reason studio were vs the hardware studio, but I'm focused on variations between sequencers, and general signal chains from computer to computer.
I'm puzzled by the idea that all sequencers sound the same from one person to another. I'm also puzzled because we ALL have different computers, mobos, processors, sound cards, interfaces, DACs; there is NO WAY there aren't variations between our signal chains. And I want a quick way to put this to the test. I was thinking of taking a 1 or 2 bar segment of a random pro track, having maybe 5 or 10 people download the segment, then all render the same wav file, with the same export specifications, in their sequencer of choice. We all route the wav to our master, keep it at 0 dB, then do a quick render, upload the new file, and compare.
My real question now: would this be a waste of time or a possible eye-opener for producers? We say certain things can affect sound quality, but I want to see to what extent sound quality is actually affected.
We also say other things do not affect sound quality, and I want to see if that's true or not (sequencer would be an example). I have a continual sneaking suspicion that my signal chain is lower quality, even if just by 1 dB. I just wanna compare it to some of you guys who are using Logic and a $1000 soundcard/interface, or other high-end equipment. I want to see if anyone can actually hear a difference.
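For comparing the uploads afterwards, something like this would show exactly how far apart any two renders are. Just a rough sketch of the classic null test; the numpy/soundfile approach and the file names are my own assumptions, not a finished tool:

```python
# Null test: subtract one render from the other and measure what's left.
# Assumes everyone uploads a WAV at the same sample rate and length.
import numpy as np
import soundfile as sf  # third-party: pip install soundfile

def null_test(path_a, path_b):
    a, sr_a = sf.read(path_a, dtype="float64")
    b, sr_b = sf.read(path_b, dtype="float64")
    assert sr_a == sr_b, "sample rates must match"
    n = min(len(a), len(b))       # trim to the shorter render
    residual = a[:n] - b[:n]      # perfect cancellation -> all zeros
    peak = np.max(np.abs(residual))
    if peak == 0:
        print("bit-identical: the null is total silence")
    else:
        print(f"peak residual: {20 * np.log10(peak):.1f} dBFS")

null_test("render_fl.wav", "render_logic.wav")  # placeholder names
```

If two renders null to silence, the sequencers did the same math; if not, the peak residual tells us whether the leftover is anywhere near audible.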
Would everyone rendering the same wav file be a valid way of testing this? |
|
|
Eric J |
I don't think this would work the way you propose it. If you send me a WAV file and I simply import it into my sequencer and then re-render it out, it's not going to sound any different. Data is data, all 1s and 0s. There might be a minuscule difference stemming from the actual rendering algorithm, but it's not going to be anything audible.
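You could even verify that for yourself by hashing just the raw sample data of the file you send me against the one I render back. A minimal sketch using only the standard library; I'm assuming plain 16-bit PCM WAVs, and the file names are made up:

```python
# Hash only the PCM frames, not the whole file: two WAVs can differ in
# their headers or metadata while carrying identical audio.
import hashlib
import wave

def audio_md5(path):
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())  # raw sample bytes only
    return hashlib.md5(frames).hexdigest()

print(audio_md5("original.wav") == audio_md5("reexported.wav"))
```

If that prints True, the re-render changed literally nothing.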
Now, if you were to listen to that same WAV file on your studio setup, with your soundcard, monitors and room, and then listen to it in my studio, you would definitely hear a marked difference, because (presumably) I have higher quality monitors, a higher quality audio interface, higher quality digital-to-analog conversion and acoustic treatment in my studio.
However, we really can't replicate that scenario through files. You'd need to physically be in your studio and then in mine, or have both sets of equipment in the same place, to really hear a difference.
The only thing I can see that might be useful is for someone to render a single sound, say a 4 bar synth arpeggio, to a WAV through different converters. Then, obviously, if you have a $200 M-Audio interface, it's going to sound worse than my $2,000 Apogee.
That being said, the advantage of having the Apogee in the first place is more about being able to hear things better when composing and mixing, rather than rendering through the actual unit (although I'm going to try this over the weekend). Once everything is finished, I still render in the DAW, which takes the D/A conversion out of the equation. However, because I can hear everything so much more clearly, my mix sounds better, which ultimately translates to a better track overall.
The problem is that even listening to the file rendered from my studio, you'd still be listening to the end result through that same lower quality converter. So when you played it back, you wouldn't really get the full effect of how much better the conversion was. You'd know it was better, but you couldn't really hear all the ways in which it is better, like crisper highs, defined mids and controlled lows. You'd still need to be in my studio to hear those differences. |
|
|
KilldaDJ |
meh, it's all down to your gear. if you churn it out of crap gear, it'll only sound like crap. |
|
|
Magnus |
I've been thinking of doing something similar to this, but to compare the sound engines of several of the more popular sequencers. Mainly, my idea was to create a track with a wide range of plugins and synths in, say, Cubase, then re-create the exact same track in Logic, Ableton, and Studio One. This would all use the same audio interface, since mine can easily be moved over to a Mac for testing in Logic. That way I could attempt to detect any variance between the 4 renders. I've always heard the sound engine in Logic is superior, and this is my main motivation for wanting to try this experiment.
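One practical snag for anyone who tries this: some DAWs pad the start of a render with a few samples of silence, so naively subtracting two renders won't cancel even when the audio is identical. Here's a rough sketch of lining two renders up by cross-correlation before nulling them (numpy and the soundfile library assumed; the file names are placeholders):

```python
import numpy as np
import soundfile as sf

a, sr = sf.read("cubase_render.wav", dtype="float64")
b, _ = sf.read("logic_render.wav", dtype="float64")
mono_a = a.mean(axis=1) if a.ndim == 2 else a  # fold stereo to mono
mono_b = b.mean(axis=1) if b.ndim == 2 else b

# Slide the first second of render b along render a to find where they
# line up best (brute force, but fine for short test clips).
corr = np.correlate(mono_a, mono_b[:sr], mode="valid")
offset = int(np.argmax(corr))

n = min(len(mono_a) - offset, len(mono_b))
residual = mono_a[offset:offset + n] - mono_b[:n]
peak = np.max(np.abs(residual))
print(f"offset {offset} samples, peak residual "
      f"{20 * np.log10(peak + 1e-12):.1f} dBFS")
```

Once they're aligned, any real engine difference would show up in that residual.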
I'm guessing there would be no noticeable difference if I were to post them all up for everyone to try to identify, but until I actually try this, or someone else does, I'm always going to be curious. |
|
|
Existo22 |
quote: | Originally posted by Eric J
That being said, the advantage of having the Apogee in the first place is more about being able to hear things better when composing and mixing, rather than rendering through the actual unit (although I'm going to try this over the weekend). Once everything is finished, I still render in the DAW, which takes the D/A conversion out of the equation. However, because I can hear everything so much more clearly, my mix sounds better, which ultimately translates to a better track overall.
|
Just out of curiosity, what Apogee do you have? |
|
|
DJ Robby Rox |
quote: | Originally posted by Eric J
I don't think this would work the way you propose it. If you send me a WAV file and I simply import it into my sequencer and then re-render it out, it's not going to sound any different. Data is data, all 1s and 0s. There might be a minuscule difference stemming from the actual rendering algorithm, but it's not going to be anything audible. |
Yeah, see, that's one thing I kinda wanted to test. 'Cause you're saying "data is data", but sequencers are using different algorithms, right? And your claim is the difference wouldn't be "audible", but it just seems odd. Why not have a standardized rendering algorithm?
What if, for instance, in Image-Line's attempt to mirror another DAW's algorithm (maybe they write them from scratch, who knows), their algorithm simply isn't as good?
I mean, if algorithms in synths can create such vast variations in sound, how is a rendering algorithm any less susceptible to variation?
Is it possible that certain sequencers (e.g. Logic) have algorithms that can handle more tracks or more sound in general, while conversely, if I tried to overload FL Studio with too much sound, it becomes difficult for the algorithm to do its job? Maybe it starts phasing or misinterpreting frequencies?
I mean, I know better than anyone that the largest factor in sound quality is the man behind the keyboard, but I've never seen anyone actually put something like this to the test.
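Actually, here's one piece of this I could test without even opening a DAW. A mix bus "rendering algorithm" is mostly just summing tracks, and the only wiggle room is floating-point rounding, so I could measure how much summing order alone can change a mix. A rough numpy sketch with made-up track counts, not anything pulled from a real DAW:

```python
import numpy as np

rng = np.random.default_rng(0)
# 32 fake "tracks" of one second of noise at 44.1 kHz, in the 32-bit
# float format most DAW mix engines use internally
tracks = rng.uniform(-0.5, 0.5, size=(32, 44100)).astype(np.float32)

mix_forward = np.zeros(44100, dtype=np.float32)
mix_reverse = np.zeros(44100, dtype=np.float32)
for t in tracks:            # sum the tracks top to bottom
    mix_forward += t
for t in tracks[::-1]:      # same tracks, bottom to top
    mix_reverse += t

diff = np.max(np.abs(mix_forward.astype(np.float64)
                     - mix_reverse.astype(np.float64)))
print(f"worst disagreement: {20 * np.log10(diff + 1e-30):.0f} dBFS")
# should land well below -100 dBFS, under what a 16-bit file can represent
```

If the worst case really does land that far down, the summing side of rendering can't be what's wrong with my sound.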
quote: |
Now, if you were to listen to that same WAV file on your studio setup, with your soundcard, monitors and room, and then listen to it in my studio, you would definitely hear a marked difference, because (presumably) I have higher quality monitors, a higher quality audio interface, higher quality digital-to-analog conversion and acoustic treatment in my studio. |
OK, because the main reason for this thread was that I just copied 5 out of 6 of Tyas' basslines to a *T* (from the FM verano bassline interview part 4). I mean, these were sounds I knew, so they were easy to reproduce (except one FM sound), and the variation between his and how mine sounded playing through FL Studio was enormous.
Obviously there's a MASS of other things going on here. But I found it surprising that while solo'd, my layers sounded nearly identical, but playing together at once in FL it sounded like an atrocious mess. I even equalized and sidechained as close as I could to how he said everything was done. It's just hard to sit here and blame myself when I don't see what I 'missed', I guess. That's for another thread, I suppose. I'll keep at it, and if I can't get close I may have to post another thread so other people can compare and use their own ears.
quote: |
However, we really can't replicate that scenario through files. You'd need to physically be in your studio and then in mine, or have both sets of equipment in the same place, to really hear a difference.
The only thing I can see that might be useful is for someone to render a single sound, say a 4 bar synth arpeggio, to a WAV through different converters. Then, obviously, if you have a $200 M-Audio interface, it's going to sound worse than my $2,000 Apogee. |
I will GLADLY test this with you this weekend if you'd like. I really would like to see if the variation is obvious, and to what extent.
quote: |
That being said, the advantage of having the Apogee in the first place is more about being able to hear things better when composing and mixing, rather than rendering through the actual unit (although I'm going to try this over the weekend). Once everything is finished, I still render in the DAW, which takes the D/A conversion out of the equation. However, because I can hear everything so much more clearly, my mix sounds better, which ultimately translates to a better track overall. |
I wasn't understanding that one important point. I thought a better quality card/interface sent the sound back to the sequencer, and if you were rendering, it recorded that higher quality right in the sequencer. For some reason I thought even with a crappy sound card I'd be able to hear the higher quality rendering that your equipment did. This really just blew my mind apart. So if I can't even hear it on my speakers, then the quality of my own music is limited by what someone else is using to listen to it. It makes the difference seem so minuscule that a better sound card/interface isn't even worth it (I'm getting déjà vu now, 'cause I realized you already answered that exact statement once before, about the various other things a quality interface does).
Granted, I understand you might get cleaner highs and mids, and that will help you mix better (though the difference probably wouldn't be distinguishable from placebo in real life), but it would seem that even with a crappy sound card, I should be able to get at least 98% of the sound a pro is getting?
And right now I'm getting like 40%. This is so confusing, because people ALWAYS say it's what you know, it's your experience, but when I watched that Tyas vid I didn't see a single thing he was doing that I don't already do or know to do. I didn't see a variation in the quality of his sounds when they were solo'd. But meanwhile, mine sounds like it was made by a deaf man.
I'm more or less ranting now, I think, but I just shake my head every time this happens. It's like I can't even get a pro sound by accident, even when I'm purposely copying every single thing they're doing.
This is why I got into production; it really seems like magic at times. I can take a painting of almost anything and copy it to a T. I've done it before, because it's such a cut-and-dried process: you have a picture, you copy it, it looks just like the original.
For music, you have a song, you copy all the sounds individually, and it sounds NOTHING like the original. Never ceases to blow my mind.
quote: |
The problem is that even listening to the file rendered from my studio, you'd still be listening to the end result through that same lower quality converter. So when you played it back, you wouldn't really get the full effect of how much better the conversion was. You'd know it was better, but you couldn't really hear all the ways in which it is better, like crisper highs, defined mids and controlled lows. You'd still need to be in my studio to hear those differences. |
OK, well, thanks for the help. Even when I think about it, any time I sample from a track and render it, it sounds exactly like the original, so it's not like I'm hearing a variation now.
I've worried that FL has SOME sort of deficit somewhere, sonically speaking, and I've spent the last few months blaming Fruity, then blaming myself, and it just goes back and forth.
I'm doing EVERYTHING else people are doing. I'm not deaf. I know the rules, and I know when I need to break them. I know what a "phat" sound is, I know what a harmonically rich sound is, I know what too much bass and not enough kick means (or vice versa), and I know how to clean up sounds so they fit. It just seems like my knowledge of music is nowhere near matching my sound. Granted, compared to some people on this board I don't really know a lot, but I know enough to know I'm missing something very basic that's destroying my music, and I don't know what it is.
I guess all I can do is just keep practicing through the frustration. I don't wanna make great music for money; I don't really care about money, tbh (I have a job I love). I wouldn't care if I made the best track in the world and no one ever heard it. It just doesn't seem like getting a good solid sound should be this difficult. My sound sources are the best they can be. Sure, I have a computer and a cheap pair of monitors, but it doesn't seem like that's the reason for the sound I keep getting.
I'll be approaching 7 years in June (started on my b-day, so I'll never forget) and it's appalling how far I am from where I want to be. I mean, I'm never gonna stop, but if I'm 40 years old (27 now) and can't make a good trance track, I might as well just throw myself off a bridge lol. |
|
|
Nightshift |
you can't blame FL, many people make great music with it. hell, Arnej (aka 8 Wonders), of all people, uses FL. |
|
|
kitphillips |
quote: | Originally posted by DJ Robby Rox
I'm puzzled by the idea that all sequencers sound the same from one person to another. I'm also puzzled because we ALL have different computers, mobos, processors, sound cards, interfaces, DACs; there is NO WAY there aren't variations between our signal chains. And I want a quick way to put this to the test. I was thinking of taking a 1 or 2 bar segment of a random pro track, having maybe 5 or 10 people download the segment, then all render the same wav file, with the same export specifications, in their sequencer of choice. We all route the wav to our master, keep it at 0 dB, then do a quick render, upload the new file, and compare. |
Well, it's an interesting idea, but you've gotten a lot wrong here. Motherboards, CPUs and the rest of your setup aren't part of the signal chain. A plugin on one computer will sound the same as a plugin on another computer. Even your audio interface (assuming you work completely in the box) isn't part of the final rendered product you release; it's just what you created it on, so it affects the way you'll have set the EQs and such.
And for your information, it's been shown that this sort of test nulls. Or at least the audio correlates down to at least -96 dB. Tests have been done across Ableton, Logic and Pro Tools at least, and none were significantly different.
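To put that -96 dB figure in perspective, it's right at the bottom of what a 16-bit file can even represent. Rough back-of-envelope numbers (my own arithmetic, not taken from the test itself):

```python
import math

bits = 16
lsb = 1.0 / (2 ** (bits - 1))  # smallest 16-bit step relative to full scale
print(f"quantization floor: {20 * math.log10(lsb):.1f} dBFS")     # ~ -90.3
print(f"dynamic range (~6.02 dB per bit): {6.02 * bits:.1f} dB")  # ~ 96.3
```

Anything down at that level is buried under the format's own noise floor, never mind your monitors or your ears.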
quote: | Originally posted by DJ Robby Rox
My real question now: would this be a waste of time or a possible eye-opener for producers? We say certain things can affect sound quality, but I want to see to what extent sound quality is actually affected.
We also say other things do not affect sound quality, and I want to see if that's true or not (sequencer would be an example). I have a continual sneaking suspicion that my signal chain is lower quality, even if just by 1 dB. I just wanna compare it to some of you guys who are using Logic and a $1000 soundcard/interface, or other high-end equipment. I want to see if anyone can actually hear a difference.
Would everyone rendering the same wav file be a valid way of testing this? |
So yes, the whole thing would be a waste of time.
The reason different sequencers sound different is differences in their pan laws, automation curves (or lack thereof), internal plugins, and things like that. You won't find any magical rendering algorithm that makes Logic sound OMG SO MUCH BETTER!!!!1111!!! like a lot of people claim.
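To make the pan law point concrete, here's roughly what two common laws do to a centered track. This is a generic sketch of the textbook formulas, not the exact curves of any particular DAW:

```python
import math

def pan_gains(pan, law="constant_power"):
    """Left/right gains for a pan position in [-1, 1] under two common laws."""
    x = (pan + 1) / 2                    # map pan to [0, 1]
    if law == "constant_power":          # the classic -3 dB-center law
        return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)
    return 1 - x, x                      # linear law, -6 dB at center

for law in ("constant_power", "linear"):
    left, _ = pan_gains(0.0, law)
    print(f"{law}: centered track plays at {20 * math.log10(left):.1f} dB per side")
```

That's a full 3 dB on every centered track between those two laws, which is exactly the kind of audible gap that gets blamed on a DAW's "engine". |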
|
|
DJ Robby Rox |
quote: | Originally posted by kitphillips
Well, it's an interesting idea, but you've gotten a lot wrong here. Motherboards, CPUs and the rest of your setup aren't part of the signal chain. A plugin on one computer will sound the same as a plugin on another computer. Even your audio interface (assuming you work completely in the box) isn't part of the final rendered product you release; it's just what you created it on, so it affects the way you'll have set the EQs and such.
And for your information, it's been shown that this sort of test nulls. Or at least the audio correlates down to at least -96 dB. Tests have been done across Ableton, Logic and Pro Tools at least, and none were significantly different.
So yes, the whole thing would be a waste of time.
The reason different sequencers sound different is differences in their pan laws, automation curves (or lack thereof), internal plugins, and things like that. You won't find any magical rendering algorithm that makes Logic sound OMG SO MUCH BETTER!!!!1111!!! like a lot of people claim. |
Yeah, but let's keep this on point.
I DID say Fruity, and you said tests only confirmed PT, Logic and Ableton. In real life, we don't assume drug D is the same as drugs A, B, and C just because we tested A through C.
Also, this is not to question your source; I genuinely want to read how that test was done, to see if it was even scientifically valid.
People keep saying "this guy works on Fruity, here's his YouTube link", but there's no FLP. I have not seen ONE FLP supplied for these tracks. Well yeah, what is their signal going into after Fruity? A Vulture? Some of these tracks sound HUGE. I just want more evidence for myself.
Lastly, I've switched FL's panning law back and forth myself, rendered with both, and never noticed a single difference. There *could* be something else going on that people are ignorant of. Have you even used Logic? Don't you use Ableton? And what about FL Studio? So would you answer everyone's complaints about certain sequencers sounding worse by claiming that one test proved all these people wrong? It's fairly common for people to complain about certain sequencers. Not only can you not produce this test right now, it was most likely done incorrectly. And I see no smart reason not to at least do some kind of test. I mean, we are producers; we need to be creative at times. =] |
|
|
Nightshift |
quote: | Originally posted by DJ Robby Rox
People keep saying "this guy works on Fruity, here's his YouTube link", but there's no FLP. I have not seen ONE FLP supplied for these tracks.
|
http://www.youtube.com/watch?v=USB0McctHZM
is this proof enough for you? you can even hear FL Studio's buffer going crazy (lol) |
|
|
kitphillips |
quote: | Originally posted by DJ Robby Rox
Yeah, but let's keep this on point.
I DID say Fruity, and you said tests only confirmed PT, Logic and Ableton. In real life, we don't assume drug D is the same as drugs A, B, and C just because we tested A through C.
Also, this is not to question your source; I genuinely want to read how that test was done, to see if it was even scientifically valid.
People keep saying "this guy works on Fruity, here's his YouTube link", but there's no FLP. I have not seen ONE FLP supplied for these tracks. Well yeah, what is their signal going into after Fruity? A Vulture? Some of these tracks sound HUGE. I just want more evidence for myself.
Lastly, I've switched FL's panning law back and forth myself, rendered with both, and never noticed a single difference. There *could* be something else going on that people are ignorant of. Have you even used Logic? Don't you use Ableton? And what about FL Studio? So would you answer everyone's complaints about certain sequencers sounding worse by claiming that one test proved all these people wrong? It's fairly common for people to complain about certain sequencers. Not only can you not produce this test right now, it was most likely done incorrectly. And I see no smart reason not to at least do some kind of test. I mean, we are producers; we need to be creative at times. =] |
I would never just upload my project files to the internet. Do you expect a magician to take you backstage after his show, too? These are trade secrets. It's up to the artists if they want to share them.
A Vulture definitely isn't a magic button to make your tracks "sound huge". Honestly dude, this is a silly thread.
yes, I use Ableton; yes, I've used Logic, PTLE, Reason, Sonar and FL; and no, I didn't notice a difference in the rendering in any of them, other than the obvious differences caused by built-in effects and such. The fact that changing the pan laws made no audible difference for you exactly proves my point: there is no difference between DAWs.
Unfortunately I can't find the link to the comparison; I think FL was in there as well. There was a thread on here ages ago linking a good site that compared the different DAWs and how they processed renders. Maybe try searching, since I don't have time to find it myself. Point is, your test has already been done, and it yielded no difference.
PS
Please stop bringing up research-methods-101 "drug A is different to..." stuff. We know you're a clever psych student already. |
|
|
evo8 |
@ robby rox - all these sequencers you mention will sound similar enough that they are not going to hinder you from making a "pro sounding" track.
Do you have any tracks that you have done recently that we can listen to?
We have all gone through this before, where we start to doubt our equipment as the reason our tracks don't sound how we think they should... "if only I had X, Y and Z, my tracks would sound far better" - don't get caught in that trap. |
|
|
|
|