2008 Final Round Judging Evaluation Report
1.) Using the ten-point scale, please describe today’s Effie judging experience.
Industry / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 / 10 / Total
Agency / 2 / 5 / 28 / 28 / 8 / 71
Advertiser / 4 / 9 / 7 / 1 / 21
Media / 2 / 1 / 2 / 5
Research / 4 / 3 / 1 / 8
Industry Association / 1 / 1
Other / 2 / 2
More than 1 Industry / 4 / 2 / 6
Total / 2 / 11 / 46 / 45 / 10 / 114
Needs Work:
· Judge in the PM session took 35 minutes to talk before we even started to judge, and then chatted a lot for the rest of the session.
· Cases didn’t follow rules, depending upon the category.
· Poor finalists; underdelivered against Effie final standards.
· Good debate. Less good work.
· Would have liked to see more strong campaigns.
· Need speakers for video. Room too small.
· Some categories need much more weeding out.
· Quality of work well below standards.
· Submissions were weaker than previous session.
· MEDIA (7): Great group, bad moderator. Did not keep things moving, was unsure what to do.
· CLIENT: I would suggest you issue a small two-sided red/green card. When judges are finished, they turn the card to indicate we can move on, which would help the moderator with timing.
Positive:
· Excellent.
· Very well organized, despite construction challenges. Smaller rooms were hot.
· Getting better every year.
· Good discussion. Maybe it would be better to have the discussion before scoring.
· Great discussion.
· I think the 4-minute video summary adds a lot of clarity.
· Great to see a breadth of work across categories. Good to hear roundtable discussion/perspectives.
· David Budner was great moderator-thoughtful and led great discussion.
· Good group. Fun and engaged.
· Work/case studies a bit weak; process was good.
· Our categories were a bit funky, but it is always a worthwhile experience.
· Very category related.
· Very thoughtful. Great judges.
· Love the dialogue this year.
· Very engaging and well moderated.
· Interesting/insightful.
· Process and team discussion was very good. Videos are a valuable addition to entry reviews.
· Very well run.
· Fascinating to read different approaches to solving a problem.
· Video and discussion were great additions!
· Video and discussion were very helpful.
· Very interesting to learn different categories.
· AGENCY (8): Round table discussion really enhances the process & enables more consistent, calibrated responses-even where there is a difference in POV.
· AGENCY (9): Great to add the conversation part.
· AGENCY (9): Good group, good insights.
· AGENCY (9): Always good
· AGENCY (9): Great interaction
· RESEARCH (9): Fun and interesting
· AGENCY (9/10): The Effies remain my favorite judging experience.
· AGENCY (8): Great discussion around entries
· AGENCY (9): I enjoyed the discussion after each section - also great moderator
· AGENCY (9): Better entries
· MEDIA (7): I like the Effie Scoring system sheet - very helpful. Is it new?
2.) Have you participated in an Effie Judging session prior to today?
Industry / Yes / No / Total
Agency / 43 / 27 / 70
Advertiser / 13 / 9 / 22
Media / 3 / 2 / 5
Research / 4 / 4 / 8
Industry Association / 1 / - / 1
Other / 2 / - / 2
More than 1 Industry / 2 / 4 / 6
Total / 68 / 46 / 114
2a.) How would you rate the overall quality of the cases (this includes the written brief + creative) that you judged today?
Industry / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 / 10 / Total
Agency / 1 / 4 / 7 / 12 / 9 / 25 / 6 / 7 / 71
Advertiser / 3 / 1 / 4 / 1 / 6 / 6 / 1 / 21
Media / 2 / 1 / 2 / 5
Research / 1 / 1 / 1 / 5 / 8
Industry Association / 1 / 1
Other / 1 / 1 / 2
More than 1 Industry / 1 / 2 / 2 / 1 / 6
Total / 1 / 7 / 10 / 17 / 15 / 36 / 20 / 8 / 1 / 114
Comments:
Needs Work:
· Not ‘tight enough’
· Very uneven; many seemed purely retrofitted.
· Not sure- made me worry.
· Results were not always linked to campaign.
· The results section varied widely from case to case. They were largely weak and hard to discuss.
· Many poor, even questionable cases omitted. A few really good ones.
· Some had to be disqualified.
· Very mixed- some great some very average.
· Quality of work well below standards.
· Very mixed. A few very good, a few NOT very good.
· A few excellent-others ok.
· Some inconsistencies in forecasts and loose interpretations of metrics.
· Typical mix… One fantastic one, a few ordinary, one terrible.
· Only a few standouts. A bit disappointing.
· Morning session was stronger but overall the insights seemed to be lacking.
· Results not specific/tied objectives; uninspired creative.
· Disappointed with the general quality of the cases.
· Results focus is not apparent enough.
· Weak categories with so-so work.
· Average.
· Just weak.
· Felt few broke conventional thinking, particularly at the insight level or in the work.
· Inconsistent from basic grammar to context.
· I was expecting a far higher standard than this in the US!! This is where it started 40 years back.
· Very inconsistent. Some very good; others should never have made it this far.
· Poor set up for goals and results.
· Just not great insights or executions.
· Very inconsistent.
· Challenges overstated; results and work mediocre.
· There were cases that needed disqualification and others missed basics like prime prospect insights.
· Not particularly compelling; felt very reverse-engineered.
· Would like more insights.
· Some cases were poorly made. Others were not coherent. None share.
· Added videos are nice, but results and research are lacking.
· Some were well done, some were skimpy/vague.
· Categories poorly defined/didn’t follow rules.
· A few great winners and a surprising amount of mediocre finalists.
· Better stories. Better commenting.
· Inconsistent data/lower quality than last year.
· Not enough real sales numbers, fairly dull except a few great ones.
· Very mixed no round 1. (Sus, Suc.)
· Lack of clarity of strategic objectives and poor integration of idea and execution.
· Briefs were weak, ideas were not well integrated or even big to begin with.
· Well composed, but the work wasn’t all worthy.
· AGENCY (7): More consistency among entrants' definitions of terms (strategy, etc.) would be good. Also better connectivity/information on results. OFTEN there's no frame of reference, or an insufficient one is provided.
· AGENCY (3): I actually thought the cases were weaker than other years.
· AGENCY (7): Varied
· CLIENT (8): We had quite a few cases where the strategy was better than the creative & measurements.
· MEDIA (8): Some of them are missing the viral/WOM element - no attention to the consumer.
· AGENCY (8): Insights based, results qualified
· AGENCY (7): Two really good ones & two rather weak
· AGENCY (7): Mixed. Some very good work. Some only solid work.
· AGENCY (8): Relevance of data + real comparisons lacking
Positive:
· I did like the 4 minute video.
· Better than years past (this is my 3rd time).
· Overall good; as campaigns become more complex, it is harder to keep a consistent format.
· The last category (Hispanic) was weaker than the others; otherwise very high quality, and the video format is an improvement.
· CLIENT (10): Clarity of thinking, execution, results
· AGENCY (7): Varied - but some real goals.
· AGENCY (7): The really good ones really stood out.
· AGENCY (9): All good
· AGENCY (9): Extremely comprehensive and well-presented
· RESEARCH (8): The video (4-minute) format is a great complement
· AGENCY (7): Mixed bag - but that's what makes it interesting!
· AGENCY (7/8): The mandatory films were a good addition
· AGENCY (9): Sharper
4.) How would you rate the overall quality of the written briefs that you judged?
Industry / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 / 10 / Total
Agency / 1 / 6 / 8 / 9 / 29 / 15 / 2 / 70
Advertiser / 1 / 1 / 1 / 5 / 6 / 6 / 2 / 22
Media / 1 / 2 / 1 / 1 / 5
Research / 1 / 1 / 1 / 4 / 7
Industry Association / 1 / 1
Other / 1 / 1 / 2
More than 1 Industry / 1 / 2 / 2 / 1 / 6
Total / 1 / 2 / 8 / 10 / 18 / 41 / 27 / 6 / 113
What makes a great written brief? Comments:
· Insight, creativity.
· Human speak.
· Links objectives to case.
· Clarity, consistency and strong evidence.
· Simple, easy to follow and understand.
· Clarity of thought.
· Smart, concise, clear, focused.
· Insights and results.
· Consistency from objectives to performance evaluation.
· First, no mistakes. Second, a real strategy.
· Context, clarity of thought, conciseness.
· Engaging tight story.
· Tight logic flow. Objectives match results; clear connection between situation, insight and creative solution.
· The objectives, goals, results woven into a compelling story.
· Interesting to read. Clear articulation of the case.
· Brevity, clarity and insight.
· Simplicity and clarity.
· Well written, clever, sets up case, insight, + real numbers.
· They still need to be more streamlined and concise.
· Brevity, focused, connective story.
· Clarity of information.
· Succinct and focused. Fewer words.
· Clear, concise, fact-based. Compelling.
· Clarity and consistent line of thinking.
· Simplicity, clarity, logic flow.
· Clear, concise, answering all of the questions.
· Solid insights. Work that pays off.
· Succinct, but enjoyable to read.
· Conciseness, lack of evasion.
· Cohesion- clear, important problem directly and powerfully solved with a big idea. Well expressed.
· Would like more insights.
· Thoughtful. Complete “thread.” Results that work from starting point.
· Clear goals and positioning.
· Some were excellent, some were poorly done.
· Clear, concise, no mistakes, A + B = C, etc.
· Inconsistent.
· Tight, focused and proofed.
· Proofread them, make them more concise, make the objectives measurable and the results meaningful.
· Consumer target, sharp, honed insight and a compelling definition of the idea.
· Strong understanding of creative community, what were objectives and link results to objectives.
· Clarity/focus.
· Simply covers all of the detail required.
· AGENCY (7): Succinct description of problem, purpose, approach & results.
· AGENCY (8): Clarity
· AGENCY (8): Concise + Ties Objectives to results
· AGENCY (7): Clear, quantifiable objectives/results with a link to the campaigns.
· AGENCY (7): Storytelling
· AGENCY (7): Clarity
· AGENCY (9): A great story, well-told.
· CLIENT (8): Concise, compelling, quantified
· RESEARCH (8): Clear, concise, storyline
· AGENCY (8): Focus + clarity - a clear story from start to end.
· AGENCY (8): Concise, dynamic
· AGENCY (7): Clear, simple w/out hyperbole
· AGENCY (7/8): Well-written, easy to navigate, no-fluff
· AGENCY (7): Great idea backed by great insight that is executed on
· AGENCY (8): Concise articulation of situation/goals + clear understanding of what makes a big idea
· AGENCY: More thoughtful
5.) How would you rate the overall quality of the creative that you judged?
Industry / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 / 10 / Total
Agency / 1 / 3 / 7 / 6 / 12 / 12 / 22 / 7 / 1 / 71
Advertiser / 3 / 2 / 3 / 5 / 8 / 1 / 22
Media / 1 / 2 / 1 / 4
Research / 1 / 1 / 1 / 2 / 3 / 8
Industry Association / 1 / 1
Other / 1 / 1 / 2
More than 1 Industry / 2 / 2 / 2 / 6
Total / 1 / 3 / 7 / 10 / 15 / 19 / 34 / 23 / 2 / 114
Why? Comments: