I suggested this a few years ago. Was told by conference selection committees (several of them) that this was “too hard” and “too much work”. I still call BS on that rationale, and I would totally love to see this happen, but it’s not a hill I’m willing to die on.
How's this for an idea: for every proposal sent in to a conference that is rejected, the program committee gives honest feedback as to why. It would be a bit of extra work, but since you usually evaluate each session anyway, you might as well write down the comments made.
It is extremely time-consuming. You'd have to be very careful not to hurt people's feelings, and some of them would start to argue. And sometimes the reason is "you're not a good speaker," which is not the kind of honest feedback to dispense casually.
Everything you just said could apply equally to code reviews. So why do we do them, either?
Because the cost/benefit ratio is better, I guess
I would argue the same here.
Globally, yes. But my guess is people typically care more about the co-workers they collaborate with than a PC member does about some potential speaker. That said, OOP in Munich asks its PC members to do exactly what you're asking for (and we do).
Good on you, then. And thanks. Because it IS a huge time suck, and I'll bet the speakers you reject are better for it. So next time I see you, my friend, first round's on me. :-)
Replying to @tedneward
Looking forward to it :)

Jun 11, 2018 · 7:07 AM UTC

Replying to @stilkov
(Uh... you may have to remind me, though. I have notoriously poor memory for remembering which round is the first. ;-) )