There have been many questions on twitter about how the review process worked and why we didn’t accept your brilliant proposal (Roger Brown and I are one for two in our own quest this year). This little item is written largely from memory, and I will undoubtedly misremember a few things.
The key details:
- Two rounds of submissions: Early bird – Jan 10 and JIT – Feb 21
- 47 proposals in the first round and ~25 in the second
- 2 Stage Producers (myself and Ola) and 6 reviewers
- We tried to give all early bird submitters 3 reviews (with some exceptions)
- JIT submitters got fewer reviews
- We set up a mailing list for conversations and a spreadsheet to track sessions, reviews, etc.
- All decisions were collaborative
- Slots: 2 x 180 min, 7 x 90 min and 6 x 60 min
- 2 1/2 months work
For the first few weeks of the first round we put most of our energy into ensuring sessions got multiple reviews. Looking back, there are several sessions that didn’t get three reviews, either because we dropped the ball (sorry Dave) or because the reviewers felt that everything useful had already been said.
Scoring and Round I Acceptances
Once we had enough reviews, the team was invited to score sessions on a scale of 1 – 5: 1 – I will block this session, 3 – pretty good, 5 – I will champion. We used the average of the scores to produce an initial ranking of the sessions. We also used the standard deviation to spot cases where there was real disagreement between reviewers.
This ranking was used only to start debate; just like with a real Product Backlog, human judgment trumps a scoring mechanism.
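The mean-plus-standard-deviation step above can be sketched in a few lines of Python; the session names, scores, and disagreement threshold here are illustrative, not taken from our actual spreadsheet:

```python
from statistics import mean, stdev

# Hypothetical reviewer scores per session
# (1 = I will block this session, 3 = pretty good, 5 = I will champion).
scores = {
    "Session A": [5, 4, 5],
    "Session B": [3, 3, 4],
    "Session C": [1, 5, 3],  # wide spread: reviewers disagree
}

# Initial ranking: average score, highest first.
ranking = sorted(scores, key=lambda s: mean(scores[s]), reverse=True)

# Flag sessions where the standard deviation suggests real disagreement
# (the 1.5 cutoff is an assumption for this sketch).
DISAGREEMENT_THRESHOLD = 1.5
flagged = [s for s in scores if stdev(scores[s]) >= DISAGREEMENT_THRESHOLD]

print(ranking)  # ['Session A', 'Session B', 'Session C']
print(flagged)  # ['Session C']
```

The flagged sessions are exactly the ones worth discussing as a team rather than accepting or rejecting on the average alone.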
At the end of the first round we accepted 3 x 90 min and 3 x 60 min sessions. To be fair to the sessions that didn’t have a chance of being accepted, we also rejected the 20 lowest-ranking sessions (again, after much discussion). Due to processing issues it took a while for the acceptances to go out.
Round II
Even though we had fewer submissions, this round was a lot harder, mostly because there was a lull after the first round but also because we seemed to burn out some of the review team. We used roughly the same process, although this time not every session received 3 reviews. If the team felt the first reviewer had covered the key details, then one review was all the session got. Scoring, ranking, etc. worked the same as they did in Round I.
If we rejected your session, it doesn’t mean it was bad or that there was anything you could do to improve it. It likely means that it just didn’t fit the mix we wanted for our stage, or that we already had one session covering that topic. This is exactly Roger’s and my experience: Neuroscience of Leadership got great reviews, with several people telling me privately they loved it, but at the end of the day Mitch and co. didn’t have room for it. As I look back, we had ~25 sessions that would have been great on our stage.
Next time I would invite more reviewers and make the time commitment clear up front, so that everyone understood from the start just how much work is required (30-40 hrs). That way all sessions would get 3 reviews.
For the conference committee:
- Replace one of the 180 min sessions with 2 x 90 min
- Make reviews public unless the reviewer or reviewee make it private
- Find a way to avoid the lull between the rounds, or do something else to reduce the length of time we were committed (2 1/2 months for our group)
- Get early acceptance/rejects out quickly
However, I won’t, as some people have suggested, make our scoring and conversations around the sessions public. If we did that we would suffer politically correct scores (i.e. “I will give him a 4 because he’s a friend”) and there wouldn’t be real discussion about the sessions.
Benefits to early submission:
- More feedback
- More time to improve
- If you were rejected you knew to move your time and energy elsewhere
- You got two chances at acceptance
My job primarily consisted of reminding people that we needed reviews, scoring sessions, and keeping our process moving along. All up I spent ~70 hrs on this stage. I really appreciated working with a great review team who put in a lot of time. Let’s do it again next year.
Also see: Thinking Tools for Conference Reviewers
Image Attribution: Agile 2011 Conference