Casual Harmony: Casual Harmony

All things Recorded A Cappella Review Board.

Postby Box_Beatin_Lady » Thu Jun 11, 2009 5:04 am

That's fair (re: use of the term "faulted"). But while the scores were good, the reviews did bring up this point, and it was something that I deeply disagreed with... and what good is the Internet if I don't use it to bitch about things? :)
My lip gloss does nothing. Damn it.

NYC Red States = New Hotness
http://www.redacappella.com
Box_Beatin_Lady
 
Posts: 340
Joined: Mon Nov 14, 2005 9:44 am
Location: New Jersey

Postby H.F. » Thu Jun 11, 2009 6:02 am

Solid point :)
H.F.
 
Posts: 460
Joined: Wed May 04, 2005 7:42 am

Postby mcbc » Thu Jun 11, 2009 6:56 am

kevin47 wrote:Historically, Wibi, the Bubs, Off the Beat and any number of Stanford groups come to mind.


otb? Yes. Talisman? Yes. The others I'm a little lost on. Wibi started out as a vocal jazz-ish group and did jazz standards. Then they rolled into pop with Rosanna, Peg, Sesame Street, John Mayer, Whitney (and not obscure Whitney b-sides, but I Wanna Dance Whitney) ... and umm In Your Eyes. These strike me more as groups that made it not by what they sang but by how they sang it. And yes, they are "old." But bygones -- I don't want to go into more tangential stuff.

H.F. wrote:I really don't see how they were "faulted" for it. They got a 4. On their second CD. And they got some cara noms and one win. I don't see too much "fault" being found there :)


Agreed. And that's one of the points I tried to make in 400 words or more vs. your 40 words or less. :)

But peel away all the snark n' stuff on RARB and I see that one of the larger discussions here is always what makes an album good vs. great; just how to get that 5th star. And hopefully that's where this thread will end up.

The two biggest knocks on the album were innovation/creativity and repeat listenability. And therein lies the rub, b/c personally, I can't buy that if CH had done an album of obscure Brooklyn hipster rock covers, 'about to hit it big' Britpop, a song in 7/8, and maybe some MIA for fun, they would have gotten 5 stars. No, they'd have ended up w/ an album that wouldn't sell and a worse review. But if I'm wrong, say so -- and why ;)
mcbc
 
Posts: 381
Joined: Tue Jul 20, 2010 7:27 pm

Postby dekesharon » Thu Jun 11, 2009 8:05 am

Beware grade inflation, and moreover grade expectation.

Assuming Take 6 and Vocal Sampling are 5s, it is quite reasonable to consider a 4 a complete success as a college group.

Always room for improvement, of course. Amateur singers, new tradition.

4 is very good.

- Deke Sharon • 800.579.9305 • http://www.dekesharon.com

dekesharon
 
Posts: 1585
Joined: Fri Jun 13, 2003 8:01 am
Location: San Francisco

Postby daverab » Thu Jun 11, 2009 8:57 am

DekeSharon wrote:Beware grade inflation, and moreover grade expectation.

Assuming Take 6 and Vocal Sampling are 5s, it is quite reasonable to consider a 4 a complete success as a college group.

Always room for improvement, of course. Amateur singers, new tradition.

4 is very good.


No doubt a 4 is a great overall score, especially when put on a scale that includes pro groups!

My personal qualm is that some reviewers seem to compare college albums to other college albums and pro albums to other pro albums, while others compare an album to everything out there.

I guess the reason I came to this conclusion was that our first album scored a 3.7 and had comments often referring to how impressive the album was considering it was a freshman CD. Both the comments and the scores seemed to heavily reflect that fact.

Whereas the second time around, the new album jumped to a different scale and was compared to untouchable groups, scoring a 4.0 even though it was WAY better than the first album. Comparing one album to the next, you would expect a 4.3 or even a 4.7.

So what happened? Were the kid gloves taken off and the reviewers more honest, now grading on the correct scale and no longer weighting the fact that it was our first CD?

From album to album you would expect a larger jump than .3 points considering the improvement. Now, I do understand that the difference between a 3.7 and a 4 is large, and the art form has grown since our first album, so considering that I wasn't expecting 5s.

My question is more along the lines of how to consistently review albums on one scale vs different scales when considering the groups background and/or production choices.

Should all albums, regardless of who made them and whether it's the first go-around or the 100th, be compared to others in the same league so you can make an apples-to-apples comparison, or is it fair to score a new group or a different style of group on a different scale?
Should all "heavily produced" albums be scored on a "heavily produced" scale? I don't know the answer....

Perfect example: if a high school group submits a CD and for a high school group they are PHENOMENAL, but they are only as good as the average college group -- are the reviewers really going to score it a 2 or 3, because that's how good it truly is, or will they score it a 4 or 5 given their limitations and/or experience, etc.?
David Rabizadeh
Director, ICCA

Founder
Rutgers Casual Harmony

Member
Rutgers Orphansporks '02-'07
daverab
 
Posts: 39
Joined: Tue Feb 19, 2008 1:49 pm

Postby seth » Thu Jun 11, 2009 9:39 am

Reviewers are instructed (and periodically reminded) to give scores without regard for the artists' personal circumstances, though they're free to talk about such things in their comments. Some people find this rule hard to follow, which is unsurprising given how compelled people tend to be by a good struggle-against-all-odds story.

Plus there's the whole "sounds pretty good for a band without any instruments" angle many of us are already comfortable with. To many people, a cappella is about process as much as it is results. (The best way to get that guitar sound is to pick up a guitar, but this is not how we do it.) RARB tries to report on the results, but it's often a struggle against one's own nature to ignore the story of process, even in order to judge fairly and usefully.

That said, however difficult the job of reviewing is, I'd rather we be judged on what we deliver. ;)
seth
Site Admin
 
Posts: 667
Joined: Wed Oct 30, 2002 1:56 am
Location: San Francisco, CA

Postby dr00bles » Thu Jun 11, 2009 9:51 am

DJR wrote:My personal qualm is that some reviewers seem to compare college albums to other college albums and pro albums to other pro albums, while others compare an album to everything out there.


Official RARB policy is to "grade albums in comparison to the general body of a cappella recordings available." That's taken from here.

So, David, a possible answer to your question about your curiously modest growth in score from your first album to your second is this: Over time, the general body of recordings changes. Listeners are exposed to new, better, sharper, and more original recordings that constantly set the bar higher. Even within the two-year time frame between your albums, the competition has grown fiercer.

But I will say that, at least for me, this is only relevant when sorting and ranking albums. Technically, a reviewer's absolute feelings about an album shouldn't be influenced by the competition available, but where he/she feels it stands among others will likely change over time. It's quite possible that in 1993, if I listened to your CD, I'd write the same things in my qualitative review, but score it differently quantitatively.

Ultimately, though, the best albums will be the best, regardless of time frame. "Casablanca" is still considered one of the greatest movies in history, after all.

Also: Technically, a high school album should be held apples-to-apples against a pro album. You can still qualify in a written review that a group is very good "for a high school group," but that doesn't merit it a 5 if there are still major problems with the record. I don't want to listen to 16-year-olds who are RELATIVELY good; I only want to listen to 16-year-olds if they're ABSOLUTELY good.

Remember also that a "4" is a good score. It's admirable! You should be happy, no question about it. Hell, I was freaking ecstatic when the Achordants got all 4s for High Stakes Old Maid.

My advice is to take the reviews for what they are and derive any learnings that you can from them.

Then, next time, you can watch Casual Harmony apply said learnings and grow closer to that coveted 5.0!
Andrew DiMartino
Music Director, UNC Achordants 2005-2008
dr00bles
RARB
 
Posts: 22
Joined: Tue Jan 01, 2008 1:34 pm
Location: New York, NY

Postby daverab » Thu Jun 11, 2009 10:26 am

Absolutely Andrew!

Like I said, there is no doubt that I am pleased with the overall numbers, and I wasn't expecting that coveted 5.

I guess I was talking more to the comments than the numbers, now that I think of it.
You know the world we live in -- a society based on quantitative grades. You may write a masterpiece of a thesis and all the comments may be glowing, only to get a C and feel like a failure. It's happened to us all, hasn't it??

A combination of Seth's response and yours, I think, sums it up: yes, there are policies and we do our best to adhere to them, but we are all human, and as objective as we'd like to be in a review, comparing apples to apples, everyone has a bias -- whether it be empathy for a rags-to-riches story or a preference for an all-male sound over an all-female sound.

I only comment on this in an effort to figure out a way to even out that playing field, if possible.

I propose this to RARB and/or the a cappella community: I realize the policy is to "grade albums in comparison to the general body of a cappella recordings available." However, is that the best way?

People put a lot of weight in numbers at face value, regardless of the comments attached to them -- a 3 doesn't beat a 4, and I'd rather buy the "4" album no matter what the comments say.

However, if at face value I knew an album was a high school album and it was graded on a high school scale, I'd quickly be able to interpret those numbers without reading the comments, making those numbers a lot more valid and meaningful. Whereas now I always hear, "Yeah, it scored a 3, but read the actual review -- they said really nice things."

Right now it seems like the numbers are irrelevant and all that matters are the comments... which I'd be fine with too. How about removing the numbers altogether?? ;)
David Rabizadeh
Director, ICCA

Founder
Rutgers Casual Harmony

Member
Rutgers Orphansporks '02-'07
daverab
 
Posts: 39
Joined: Tue Feb 19, 2008 1:49 pm

Postby dherriges » Thu Jun 11, 2009 10:58 am

The individual numbers from each reviewer are very useful to me, I think -- they provide an instant sense of "good," "amazing," or "average." The averaged score is less valuable; I honestly mostly ignore it, since a 4.0 means something very different if it was straight 4s (the reviewers unanimously thought the album was without major flaws) versus a 5, a 4, and a 3. I don't at all mind having the aggregate scores, but I think they shouldn't be what groups focus on, especially when it's so easy to see, from the little horizontal rows of dots next to the review, how the three reviewers each scored it.
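To put some numbers behind that point -- a quick Python sketch with hypothetical scores (not from any actual review) showing how identical averages can hide very different levels of reviewer agreement:

```python
from statistics import mean, stdev

unanimous = [4, 4, 4]  # three reviewers all gave a 4
split = [5, 4, 3]      # same average, very different opinions

# Both score sets average to 4.0...
print(mean(unanimous), mean(split))

# ...but the spread tells the real story: 0.0 vs. 1.0
print(stdev(unanimous), stdev(split))
```

The average alone can't distinguish unanimity from disagreement, which is exactly why the per-reviewer dots are worth a look.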

Getting all 4's, as a college group, is an accomplishment - but it does seem like that 4.0 score is becoming a ceiling for many collegiate recordings (except some groups like the Bubs who are consistently pushing the innovation boundaries and putting the rest of us to shame). Would love this thread to continue, as someone else said, along the lines of what separates the 4's from the 5's - how do we aim for greatness, especially when the raw vocal talent available to collegiate groups is generally not at a pro level (maybe with the exception of 2 or 3 amazing soloists per group)?

Also, this has come up before for sure, but "this album vs. our last album" comparisons are really not valid, because the bar has been raised so quickly for recorded a cappella in recent years. I should know: my group's last CD scored a 3.0, compared to a 4.0 for its (IMO inferior) previous effort. I had to spend a good deal of time explaining the raised-bar effect to disappointed group members.
dherriges
 
Posts: 552
Joined: Tue Jul 26, 2005 5:37 pm
Location: San Francisco, CA

Postby dr00bles » Thu Jun 11, 2009 11:29 am

dherriges wrote:Would love this thread to continue, as someone else said, along the lines of what separates the 4's from the 5's - how do we aim for greatness, especially when the raw vocal talent available to collegiate groups is generally not at a pro level (maybe with the exception of 2 or 3 amazing soloists per group)?


It's a great question, and one that I think deserves a proper discussion.

The fact is, on a 1-5 scale, you should expect a normal bell-curve distribution: few 1s and 5s, somewhat more 2s and 4s, and even more 3s. This is generally why we see fewer 5s and 1s -- it's not often that something is SO far below or above average that it warrants a minimum or maximum score.
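A small Python sketch of that bell-curve intuition, under a purely hypothetical model (scores drawn from a normal distribution centered on 3; the sigma value is an illustrative assumption, not RARB data):

```python
from statistics import NormalDist

# Hypothetical model: album quality follows a bell curve centered on 3.
scores = NormalDist(mu=3, sigma=0.8)

# Probability mass of each whole-number score, binned at +/- 0.5.
bins = {s: scores.cdf(s + 0.5) - scores.cdf(s - 0.5) for s in range(1, 6)}

for score, p in sorted(bins.items()):
    print(score, f"{p:.0%}")
```

Under any such model, 3s dominate, 2s and 4s are less common, and 1s and 5s are rare -- which matches the shape of scores you'd expect to see if reviewers grade against the whole body of recordings.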

Now, per my previous comment, we hold all groups to the same standard. That said, it should be expected that pro groups will bring in a 5 more often than collegiate groups. That's what makes them pros! Think about it. In general, NBA players are of higher quality than NCAA players. Only the very best collegiate ball players are comparable to the pros. It's the same with a cappella. It's the same, actually, with most anything.

That's not to say that pro groups automatically get 5s, or that collegiate groups should not expect to see 5s. But there are far more collegiate groups than pro groups, so the talent per group is not as concentrated in college. And as collegiate groups don't always have the best singers, they sometimes need to rely on other factors in order to get the best score possible.

And so we arrive at your question. I'd say the best thing that a collegiate group can offer is originality. Show us something we've never heard before. Attempt new interpretations. Perform daring arrangements. Write original music! This stuff will definitely get people's attention. Especially today, when so many albums are sent through the machine and come out as shiny, compressed, technically "perfect" records, creativity is essential. We've heard plenty of ultra-processed Gnarls Barkley at this point. Give us something that will make us mutter, "I can't believe they're doing this!"

Also, have a solid identity. As I mentioned before, so much collegiate a cappella sounds the same these days. Having a brand will give your group an edge. It starts with your group's sound, and can extend to your packaging, your content, your website, and just about anywhere else your group "lives." Nothing contains more information than a brand does. Think about, say, Pepsi. So many emotions, images, thoughts, and tastes run through your head! All from one word. The more your brand stands for, the more you will stick out.

And finally, the quality you put into an album equals the quality you'll get out of it. And believe me, it doesn't happen on the production end -- it happens when you open your mouth. If you have great raw tracks, it'll show in the end; conversely, if your raw tracks aren't great, no amount of gloss will hide it. Even if your singers aren't the best, if you come in well-rehearsed and enthusiastic and take the time to do it right, you'll be happy with the outcome.

So, differentiate, develop a brand identity, and produce quality work. Is it clear I'm in advertising?

More importantly, what do others think?
Andrew DiMartino
Music Director, UNC Achordants 2005-2008
dr00bles
RARB
 
Posts: 22
Joined: Tue Jan 01, 2008 1:34 pm
Location: New York, NY

Postby colton » Thu Jun 11, 2009 3:06 pm

DJR wrote:Perfect example: if a high school group submits a CD and for a high school group they are PHENOMENAL, but they are only as good as the average college group -- are the reviewers really going to score it a 2 or 3, because that's how good it truly is, or will they score it a 4 or 5 given their limitations and/or experience, etc.?


It should be the former, not the latter, if the reviewer is doing his/her job correctly. That is, we are told to judge on an absolute scale, "relative to the general body of a cappella" (or something like that).

You should be able to answer this question for yourself once the reviews for this album show up: http://rarb.org/reviews/932.html
(Hopefully just a couple of weeks.)
colton
RARB
 
Posts: 543
Joined: Mon Mar 03, 2008 1:45 pm
Location: Orem, UT

Postby RnBMrE » Thu Jun 11, 2009 3:14 pm

Man, it's about time we had another one of these unnecessarily drawn-out review threads.

Viva La Forum!

Matt Emery
CASA Director of Communications
Three-time Recipient of RARB "Post of the Year" Title

RnBMrE
 
Posts: 712
Joined: Mon Mar 06, 2006 8:14 pm
Location: Washington, DC

Re: Casual Harmony: Casual Harmony

Postby billhare » Sun Jun 14, 2009 6:06 am

dherriges wrote:
WareHauser wrote:I'm sure I could find plenty of 5 vs. 2, and 4 vs. 1 scenarios.


Song, yes. Album, VERY unlikely. Maybe Seth or someone would know how many times that's ever happened? A 2-minute skim of the review list spotted only this one: http://rarb.org/reviews/815.html

Sure, there's personal taste and such involved, but I also think the RARB reviewers are generally competent judges of, well, competence.


Haha, and today, it finally happens for the first time in RARB history! Very prophetic that we would be on that subject this week.
http://rarb.org/reviews/923.html

Bill Hare
Some dude who records and mixes people who can't play instruments.
http://www.dyz.com

billhare
 
Posts: 2002
Joined: Thu Feb 13, 2003 11:14 am
Location: Silicon Valley, CA

Postby H.F. » Sun Jun 14, 2009 6:32 am

Hmmm. I wonder if that 5 will be changed.
H.F.
 
Posts: 460
Joined: Wed May 04, 2005 7:42 am

Re: Casual Harmony: Casual Harmony

Postby cmarston » Sun Jun 14, 2009 1:08 pm

billhare wrote:
dherriges wrote:
WareHauser wrote:I'm sure I could find plenty of 5 vs. 2, and 4 vs. 1 scenarios.


Song, yes. Album, VERY unlikely. Maybe Seth or someone would know how many times that's ever happened? A 2-minute skim of the review list spotted only this one: http://rarb.org/reviews/815.html

Sure, there's personal taste and such involved, but I also think the RARB reviewers are generally competent judges of, well, competence.


Haha, and today, it finally happens for the first time in RARB history! Very prophetic that we would be on that subject this week.
http://rarb.org/reviews/923.html


Seriously though, what's the deal with the Ridin' Derby review? It seems strange to me that personal preference on levels of auto-tune (which is the only useful information I managed to get from this review as a whole) would create such a ridiculous disparity in the scores.
Cameron Marston
Fundamentally Sound (UW-Madison)
President '08-'09
--------------------------------------------------
http://www.FSacappella.com
http://www.youtube.com/FSacappella
http://twitter.com/FSacappella
http://www.myspace.com/fsacappella
cmarston
 
Posts: 39
Joined: Sat Dec 13, 2008 7:58 pm
Location: Madison, WI
