A growing number of academic and collecting institutions are using crowdsourcing to create and enhance online collections and resources more cost-effectively, engage the wider community, and enable research. Online volunteers are assisting with a wide range of tasks, such as tagging, identification, proofreading, transcription, text encoding, translation, and contextualisation.
For those of us in the early stages of developing a crowdsourcing project, and for others investigating this approach, there are many questions to be addressed:
- Which existing projects serve as precedents?
- What are the risks and advantages of crowdsourcing the task?
- Who is ‘the crowd’ and what are the benefits to the volunteer?
- What resources are required, and how long will it take to achieve our objective?
- Does an appropriate crowdsourcing tool exist, or do we need to build a custom solution?
- How do we optimise the system and website for participation?
- What level of volunteer support and moderation is needed for quality control?
- What metrics should we use to evaluate the project?
I’m keen to talk shop with folks interested in these questions and others.