9. AN AGENDA FOR RESEARCH AND EXPERIMENTATION

Finally, we highlight valuable directions for further research and experimentation in crowdlaw.

This is a draft version of the report (dated October 12, 2017) and will be updated in November.

By exploiting technology to engage a broader and more diverse constituency in the process of proposing, drafting, editing and informing legislation, crowdlaw has the potential to (1) enhance the effectiveness, legitimacy, and accountability of lawmaking practices and to (2) transform fundamentally the source of authority undergirding the legislative process. Three broad hypotheses have informed our work:

● Lawmaking that is participatory is more effective because it brings more diverse ideas and information to bear.

● Lawmaking that is participatory is more legitimate because it engages broader groups of participants.

● Lawmaking that is participatory is more accountable because it subjects the process of crafting laws and regulations to greater scrutiny.

Yet as much as government of, by and for the people is an aspiration in a democracy and in every strand of participatory democratic theory, we have very little understanding of the actual impact of crowdlaw because tech-enabled engagement in parliamentary procedure is so new. More specifically, we lack empirical evidence of how changes in process affect the outcomes of engagement and of how to design and use crowdlaw in order to enhance rather than degrade the legislative process. More research is both feasible and needed, especially given the ability to run controlled trials by modifying the platforms used to run engagement processes.

Thus, in this section, we offer a sketch of a future research agenda on crowdlaw, some discussion of the methods for studying crowdlaw, and a suggested role that the Open Assembly Lab could play in supporting the research necessary to understand and evolve Spain’s crowdlaw policies, platforms, and practices.

Crowdsourcing for public and administrative decision-making

Crowdlaw, of course, is based on the term crowdsourcing, coined in 2006.¹ Generally, crowdsourcing is the outsourcing of a function usually performed by employees of an organization to a “crowd” (people outside the organization) by means of an open call. There is a growing literature on crowdsourcing, open innovation, and the use of technology to enable group work. Scholars such as Karim Lakhani,² Kevin Boudreau,³ Henry Chesbrough,⁴ and John Prpić⁵ write about the role of the crowd in enabling business innovation. In its application to business and science, particularly in the management and social psychology literature, crowdsourcing has been shown to have a demonstrable effect on problem-solving capacity, as well as on the speed, accuracy, and diversity of ideas generated.

There are various styles of crowdsourcing, including challenges and contests, which articulate a problem, solicit many solutions, and pick a winner among them. Such contests work well when it is not obvious what combination of skills or even which technical approach yields the best solution to a problem; such was the case in the TopCoder Immunogenics Challenge, which yielded 89 novel computational solutions to the stated problem in two weeks. Thirty of those submissions exceeded the benchmark performance of the US National Institutes of Health, and none came from academic or industrial computational biologists. But beyond spurring greater innovation through competition, crowdsourcing can also involve coordinating collaboration on a shared product such as Wikipedia, where the goal is to scale the number of people contributing. Finally, in addition to competition and collaboration, crowdsourcing can refer to asking a group to solve a problem to which many additive solutions are needed, such as the creation of multiple apps.

Now crowdsourcing is becoming part and parcel of standard practices in the public sector, too. Preliminary research shows that crowdsourcing, because it expands the number and diversity of problem-solvers, is also leading to positive outcomes in administrative decision-making, including the use of crowdsourced information to improve the examination of patents, of crowdsourced problem-solving to tackle difficult questions posed by upwards of 750 federal agencies, and of the crowdsourcing of policy ideas. For a robust, succinct review of such efforts we recommend Helen Liu’s “Crowdsourcing Government: Lessons from Multiple Disciplines.”⁶

Other scholars have since written extensively about technology-enabled engagement, most notably: Hélène Landemore,⁷ Daren Brabham,⁸ ⁹ Tanja Aitamurto,¹⁰ and Ines Mergel. The study of tech-enabled public engagement as it applies to lawmaking and the work of legislatures, however, is only in its infancy. Most notably, Cristiano Ferri produced an extensive monograph addressing the interaction between contemporary democratic tradition, technological innovation, and citizen participation.¹¹ The scholarship also includes Landemore and Aitamurto’s survey of the crowdsourcing of an off-road traffic law in Finland, providing a key assessment of participant motivations and impressions.¹²

Improving coordination and methods in crowdlaw research

In order to make sense of the evolving field of crowdlaw we need, as MIT professor Tom Malone et al. say about online collaboration generally, to “map the genome” of public participation in lawmaking practices.¹³ That is to say, research is needed to catalog and organize systematically the different components of participatory lawmaking practices according to a common taxonomy that can be used to study them (in the same way as open innovation researchers have done for the study of crowdsourcing in business or that social psychologists have done when describing different forms of group work).¹⁴

Given, first, the traditionally deep distrust of groups endemic to the social psychology literature on “groupthink,” which condemns the presumed tendency of groups to drift to extreme positions,¹⁵ and second, the heretofore fairly poor design of engagement processes such as electronic petitions,¹⁶ it is not self-evident that participatory lawmaking practices lead to improvement. Rather, there is a need to study them and assess whether and under what circumstances crowdlaw impacts the lawmaking process. In this report, we have used a six-factor framework for organizing and describing the case studies, but the taxonomy needs to be expanded and deepened.

To review, our case studies are organized by (1) the task the crowd is asked to perform (e.g., comment or draft), (2) the method (e.g., participatory budgeting or consensus council), (3) the stage of the lawmaking process (e.g., agenda-setting, monitoring), (4) the tech platform (e.g., mobile or web), (5) the legal framework (e.g., institutionalized participation or ad hoc) and (6) the evaluation of impact (e.g., formal evaluations, if any).

As we expand our analysis of available cases and conduct more in-depth research, we would advocate looking at eleven factors in much greater detail. These would give us a more granular way to understand crowdsourcing practices and to study them. Refining this taxonomy will require further consultation and deliberation with scholars and practitioners.

We would propose looking at the following eleven attributes of crowdlaw (a sketch of how individual cases might be coded against them appears after the list):

  1. Ownership: We hypothesize that projects run and controlled by the parliament itself have better outcomes because they are integrated into the workflow of the legislature. So, we want to code each example based on who runs the process. Crowdlaw has been practiced by traditional legislatures but also by political parties and by activist groups seeking to build a base of support for a particular piece of legislation.
  2. Audience: We hypothesize that, in the absence of active steps to invite participation from diverse audiences, participation will be largely male and upper middle-class. Thus we need to develop a way of describing the demographics and other attributes, such as expertise, of crowdlaw participants. This builds on earlier work to ‘unmask the crowd,’ which studied a crowdsourced law-reform initiative in Finland and found that participants were mostly educated professional males.¹⁷ A study of participants on Change.org found that female participation was higher than expected in “thin participation” (e.g., signing petitions) but underrepresented in “thick participation” (e.g., creating petitions).¹⁸
  3. Incentives: There is a great deal of social psychology and management literature on the relative value of extrinsic versus intrinsic incentives as motivators for participating in online communities generally, but nothing specific to the legislative process.¹⁹ ²⁰ ²¹ Thus, we want to know which incentives are most effective at enticing the public to engage in participatory lawmaking. To design participatory governing processes for the digital age, researchers must dig into the age-old question of human motivation. We hypothesize that clearly defined rules of procedure (guidance), an understanding of the relevance of one’s participation to the ultimate outcome (relevance), and the ability to make a difference (impact) are primary motivators for repeated engagement.
  4. Task: What is the participatory task? In some cases, the participating public is asked to propose legislation and in others to help with drafting. In still other cases, legislation is written by professional staff but commented on and edited by the public. There is no common understanding of the impact of task-type in a legislative crowdsourcing context. We need to understand which of these practices work better than others and the hallmarks of success and failure.
  5. Law type: What is the type of law being produced? There are new participatory experiments involving the crafting of regulations, legislation, and constitutions, all of which have the binding force of law. We want to understand the impact of the type and political status of the law, such as comparing participatory constitution drafting with participatory legislating. We should easily be able to flesh out the taxonomy to describe different types of lawmaking as well as to code for who introduced the legislation. It will be key to understand whether, when it comes to other factors such as task or audience or incentive, participatory constitution drafting holds much in common with participatory legislating.
  6. Topic: What is the subject matter of the law being drafted? In many cases, crowdlaw processes are adopted in connection with the formulation of laws proposed by the executive, and in others with those proposed by the legislature. Some are controversial bills and others quite apolitical. We can easily assess the level and quality of engagement when bills are highly contested and polarizing versus when they are not.
  7. Feedback: What feedback is provided to participants? We want to understand the role that feedback plays by looking at whether and how the parliament provided feedback and at the impact of such communication on whether people participate and whether they return.²² Some crowdlaw processes have the public making contributions without a response from the institution; others involve generalized responses, and others specific feedback. The goal is to track what is taking place and which systems seem to create more incentives to join and to return, with the hypothesis that more government response and interaction will increase participation and frequency.
  8. Platform: Tracking who is using what kind of platform, from web-based to SMS-based. We can also interview platform owners and designers to learn why, given the diverse free and open tools for crowdsourcing that are available, some organizations prefer to develop their own tools. What are the most common and effective crowdlaw tools available? Are they based on open-source or proprietary technology? What kinds of interactions have been used in crowdlaw experiments, and what results do they bring about?
  9. Legislative stage: Tracking the stage of the lawmaking process at which engagement is sought. At present, we know that most crowdlaw is taking place at the proposal-making or drafting stage. But as more projects come online, are they occurring at other stages of the legislative lifecycle, such as monitoring or evaluation, and which practices attract more, and more robust, participation? Although most crowdlaw practices today involve commenting on drafts, we hypothesize that the unexplored territory of asking the public to monitor and evaluate the impact of legislation and to contribute information to developing legislative solutions prior to drafting is likely to be a robust area of opportunity.
  10. Timing: How long was the opportunity to participate? What is the impact of shorter versus longer participation timeframes at different stages of the lawmaking process? For example, does too long a window depress participation, and does too little time increase frustration? Also, should participation be divided into multiple phases? For example, the Ministry of the Environment and the Committee for the Future in the Finnish Parliament initiated a crowdsourced off-road traffic law reform in Finland in 2013. About 700 Finns participated in the law-reform process online by sharing their ideas, knowledge, and perspectives about off-road traffic. The participants shared about 500 ideas, 4,000 comments, and 19,000 votes in the crowdsourcing process. The process was divided into three phases.²³ More work is needed to compare single versus multi-stage processes.
  11. Training: What is the impact of training? What is the impact of framing the issue of engagement prior to participation? Does providing a short tutorial on a topic increase the quality of public inputs? How can training accommodate varying learning processes and abilities? Training should not only be a consideration for citizens, but for public officials too, as they may need context to understand a crowdlaw platform or how best to make use of the crowdlaw initiative in their work. We hypothesize that training prior to engagement increases the quality of participation and usefulness of inputs received from the public.
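
To make the taxonomy concrete, the sketch below shows one way a single crowdlaw case could be coded against the eleven attributes. The field names, category values, and the example entry (loosely based on the Finnish case discussed above) are illustrative assumptions that would need to be refined with scholars and practitioners, not an established coding standard.

```python
# Illustrative sketch of a coding schema for the eleven crowdlaw attributes.
# All field names and category values are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CrowdlawCase:
    name: str                      # e.g., "Finnish off-road traffic law reform"
    ownership: str                 # who runs the process: "parliament", "party", "activist group"
    audience: str                  # open call vs. selected sample, plus known demographics
    incentives: List[str]          # e.g., ["intrinsic: civic duty", "extrinsic: prize"]
    task: str                      # "propose", "draft", "comment", "monitor"
    law_type: str                  # "regulation", "legislation", "constitution"
    topic: str                     # subject matter, e.g., "environment"
    feedback: str                  # "none", "generalized", "individualized"
    platform: str                  # "web", "mobile", "SMS"
    legislative_stage: str         # "agenda-setting", "drafting", "monitoring", "evaluation"
    timing_days: Optional[int] = None   # length of the participation window, if known
    training_provided: bool = False     # was any tutorial or framing offered?
    evaluation: Optional[str] = None    # formal evaluation of impact, if any

# Example entry, drawn loosely from the Finnish case described above.
finland_offroad = CrowdlawCase(
    name="Finnish off-road traffic law reform (2013)",
    ownership="parliament",
    audience="open call; roughly 700 participants",
    incentives=["intrinsic: civic duty"],
    task="propose and comment",
    law_type="legislation",
    topic="off-road traffic",
    feedback="generalized",
    platform="web",
    legislative_stage="drafting",
    timing_days=None,            # exact window not coded in this sketch
    training_provided=False,
    evaluation="academic studies of participant motivations",
)
```

Coding cases in a structured form along these lines would make it straightforward to compare initiatives across jurisdictions and to test the hypotheses listed above.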

An expanded research project will create an evidence base that can help us to understand the design elements of a crowdlaw process and to draw generalizable conclusions about when, where, and which practices produce results in line with the initiators’ goals. These research results could enable legislatures to decide which forms of crowdlaw to adopt and scale.

We want to understand how to design an effective crowdlaw process and, at the same time, generate empirical insights to inform reflection about the impact of engagement on the legitimacy of lawmaking. There is always the risk that engagement exercises are mere “democracy theatre” that are employed to make institutions appear more legitimate. These kinds of participation are to real engagement as “Kabuki theatre is to human passions,” writes the former general counsel of the Environmental Protection Agency (EPA), E. Donald Elliott. They are “a highly stylized process for displaying in a formal way the essence of something which in real life takes place in other venues.”²⁴

This so-called “crowd-washing” can be dispelled by generating meaningful insights about the effects of crowdlaw on both institutions and individuals. Do participants learn about lawmaking? Do they change their political views? Does it enhance participants’ trust in politics and in government? Does it enhance public awareness of topics involved in policy discussions? Are there harmful results? Similar questions need to be asked and examined from an institutional perspective, inquiring whether institutions and those who work for them view participation as helping the effectiveness and efficiency of the system. In addition to such qualitative measures, we can, over the long run, also study the effectiveness of legislation created using crowdlaw and determine its value, using such measures as whether it was more or less subject to litigation and judicial review, whether it was eventually amended, and whether it, in fact, achieved its stated goals more effectively.

Methods for Studying Crowdlaw: Research in the Wild

To advance research into how real-world institutions such as legislatures could use technology to engage with the public, we need to accelerate the design and execution of experiments that will help us to understand whether, in fact, obtaining diverse public input through the Internet improves the legitimacy and efficacy of governing processes. Even more powerful forms of evaluation are possible because these systems run largely on digital, tech-based platforms. Thus, it is possible to design experiments and instrument crowdlaw software to construct controlled trials. Technology makes it easier to accelerate the speed and scale of empirical observations and data collection.

As noted above, we lack empirical evidence of how changes in process affect the outcomes of engagement and of how to design and use crowdlaw in order to enhance rather than degrade the legislative process. Randomized controlled trials (RCTs) will shed empirical light on how to design crowdlaw processes, practices, and policies. Scientific analytical methods from across a variety of fields, including the social sciences (especially what is called “crowd science”), data science, and systems modeling, can then be used to draw insight from the collected data.

To understand how RCTs might be used in connection with crowdlaw, consider a few examples. It is conceivable to have a platform that randomizes people into participation opportunities at different stages of the lawmaking process. Thus, a participant might be given the chance to comment on one law but elsewhere be invited to monitor implementation of the law. We could imagine testing different prompts by randomizing public participants into two groups, one encouraged to participate for the good of the country and another for the chance of winning a prize, in an effort to understand incentives better. Take, as a third example, the question of the role that feedback plays in creating incentives to participate. We could easily imagine constructing an experiment whereby half the participants receive a reply from the parliament about how their feedback was used and half do not, in an effort to measure whether such participants are more or less likely to participate again.
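
As a sketch of how the third example could be instrumented on a crowdlaw platform, the code below randomizes participants into a “reply from parliament” arm and a control arm and then compares return rates between the two. The function names, assignment rule, and synthetic data are illustrative assumptions, not an existing platform’s API.

```python
# Minimal sketch of a randomized feedback experiment on a crowdlaw platform.
# The assignment rule and return-rate comparison are hypothetical illustrations.
import random

def assign_arm(participant_id: str, seed: int = 42) -> str:
    """Deterministically randomize each participant into 'feedback' or 'control'."""
    rng = random.Random(f"{seed}:{participant_id}")
    return "feedback" if rng.random() < 0.5 else "control"

def compare_return_rates(participants, returned_ids):
    """Compute the share of participants in each arm who came back later.

    participants: iterable of ids who submitted a contribution.
    returned_ids: set of ids observed participating again at a later stage.
    """
    counts = {"feedback": [0, 0], "control": [0, 0]}  # [returned, total]
    for pid in participants:
        arm = assign_arm(pid)
        counts[arm][1] += 1
        if pid in returned_ids:
            counts[arm][0] += 1
    return {arm: (ret / total if total else 0.0) for arm, (ret, total) in counts.items()}

if __name__ == "__main__":
    # Synthetic data: in a real deployment, the 'feedback' arm would receive a
    # reply from parliament and return behavior would be observed on the platform.
    participants = [f"user{i}" for i in range(1000)]
    returned = {pid for pid in participants if random.random() < 0.2}
    print(compare_return_rates(participants, returned))
```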

Previously spurned by the academic and public sectors as potentially reckless, testing crowdlaw interventions using RCTs and other experimental designs, when done well and with ethical sensitivity, can help to forestall bad designs, wasted taxpayer dollars, and, perhaps worst of all, greater frustration and distrust of government.

There is sufficient innovation taking place around the world to enable more natural experiments during which researchers observe the differences between naturally occurring crowdlaw projects in different jurisdictions. Depth of participation differs between crowdlaw projects, by design. For example, We the Citizens in Ireland, Participatory Decentralization in Montevideo, and the forthcoming Citizens Assemblies on Brexit in the UK all select citizens to partake in consultations, whereas other systems like GovTogetherBC in Canada, Barcelona Decidim, and Better Reykjavik have open calls to citizens. This diversity allows comparisons across projects. Of course, given the multivariate nature of crowdlaw, causation cannot be inferred with certainty in such cases. But they provide useful real-life case studies that avoid the challenges of simulating complex forms of engagement.

The Open Government Partnership and the National Democratic Institute are working across the Western Hemisphere to promote legislative transparency and openness, giving parliaments the tools, education, and support to do so.²⁵ OGP has generated 2800 commitments across 159 National Action Plans, which include promoting legislative openness and, increasingly, citizen engagement by parliaments.²⁶ Fundación Ciudadano Inteligente is managing ten transparency, accountability, and participation platforms across Chile and Brazil, including a platform under development that will enable participation in policy implementation in the local government context.²⁷ Additionally, organizations such as Directorio Legislativo are monitoring debates and information flows across 18 countries in Latin America, using consensus-building and partnership with advocacy organizations to increase data available on topics being legislated.

With tech-based engagement such as online engagement platforms, it also becomes easier to undertake qualitative experiments, including the dissemination of surveys and questionnaires to participants pre- and post-engagement to inquire about their motivations or to test their level of political and civic knowledge pre- and post-participation. It is also possible to inquire of participants who sign up but never participate or to ask questions of those who are more or less active participants.

As crowdlaw initiatives proliferate, it will become faster and easier to replicate these experiments at greater scale and frequency.

The Role of the Open Assembly Lab

To institutionalize crowdlaw in practice requires a parallel effort to undertake mixed-methods research to learn what works at each stage of the lawmaking process. Thus, we can envision, for example, testing myriad questions in practice as the Assembly begins to roll out new crowdlaw mechanisms. Effectively undertaking such research will require collaboration between the academy and government, to design experiments and implement them in practice.

To advance research on crowdlaw and, in turn, assess and evolve its own crowdlaw practices, the Open Assembly Lab should therefore:

1) Create a global research advisory network to work with multidisciplinary researchers from law, political science, computer science, human-computer interaction, sociology, and other relevant fields to design ethical and implementable experiments in conjunction with the roll-out of crowdlaw practices.

2) Establish a data collection mechanism for accumulating the data thrown off by citizen engagement processes and, subject to privacy guarantees, open up that data to the research community to study.

3) Require all researchers using this data to, in turn, share their own data and results and make their methods transparent.

4) Work with the advisory network to establish data standards and a data dictionary to ensure that the resulting data can be compared (a minimal sketch of what such a data dictionary might look like appears after this list).

5) Reach out to practitioners in other jurisdictions to encourage similar data standardization and sharing efforts and to catalyze research experiments across jurisdictions.

6) Create a research fellowship or grant program and invite proposals from those beyond the advisory network who wish to undertake advanced research on crowdlaw. Such opportunities should, in particular, target younger, more diverse, and interdisciplinary researchers and support collaboration between researchers and parliamentary staff.

7) Hire staff for the Assembly with training in research methods and train all staff in how and when to use RCTs and other experimental design methods so as to create a sensibility for and awareness of the value of research. Where academics cannot be brought into government, however, the Assembly should push questions and accompanying data out to them in the field.

8) Report data and relevant aggregate statistics from experiments at the Lab and other institutions to the Assembly and the Spanish public.

9) Develop and publish ethical guidelines for conducting research involving public participants. Rules on ethical but efficient administration of research need to be clarified.

10) Disclose all information collection and conduct research consistent with European and Spanish law and human rights values in order that participants know when they are participating in a research experiment.
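
As a purely illustrative sketch of recommendation (4), the example below shows what a minimal shared data dictionary and validator might look like for engagement data. The field names, types, and allowed values are assumptions for illustration rather than an agreed standard; an actual dictionary would be negotiated with the research advisory network.

```python
# Illustrative sketch of a shared data dictionary for crowdlaw engagement data.
# Field names and allowed values are hypothetical placeholders.
DATA_DICTIONARY = {
    "submission_id": {"type": str, "description": "Unique identifier for a public contribution."},
    "stage": {"type": str, "description": "Legislative stage at which the contribution was made.",
              "allowed": {"agenda-setting", "drafting", "monitoring", "evaluation"}},
    "task": {"type": str, "description": "What the participant was asked to do.",
             "allowed": {"propose", "draft", "comment", "vote"}},
    "feedback_received": {"type": bool, "description": "Whether the institution replied to the participant."},
}

def validate(record: dict) -> list:
    """Return a list of problems so datasets from different bodies can be compared."""
    problems = []
    for field_name, spec in DATA_DICTIONARY.items():
        if field_name not in record:
            problems.append(f"missing field: {field_name}")
            continue
        value = record[field_name]
        if not isinstance(value, spec["type"]):
            problems.append(f"{field_name}: expected {spec['type'].__name__}")
        elif "allowed" in spec and value not in spec["allowed"]:
            problems.append(f"{field_name}: unexpected value {value!r}")
    return problems

# Example: a record from a hypothetical drafting consultation (no problems found).
print(validate({"submission_id": "abc-123", "stage": "drafting",
                "task": "comment", "feedback_received": True}))
```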

In addition to its value for how we design democratic institutions, crowdlaw research will advance scholarship in the legal academy by addressing the impact of technology on legislative processes. By advancing our understanding of how and why groups collaborate online, crowdlaw research will also contribute to the empirical social sciences. This work is urgently needed because we know that crowdlaw practices, in many cases, do not seem to be working well and lack established criteria for evaluation. Thus, crowdsourcing in the legislative arena is a ripe and important area for research with the potential to advance and build a field of study and, at the same time, have contemporaneous impact for public institutions.
 
Because it can help institutions innovate at a time when the world is desperate for a re-imagining of democratic mechanisms, crowdlaw has the potential to transform lawmaking radically by injecting more, and more diverse, sources of ideas, information, and expertise into the lawmaking process at every stage. With rates of trust in government at all-time lows, the legitimacy of traditional representative models of lawmaking, typically dominated by political party agendas and conducted by professional staff and politicians working behind closed doors, is increasingly called into question. There is frequent critique of the absence of democratic legitimacy in the lawmaking process, a concern which only grows with the delegation of power to unelected agencies to craft the rules that implement legislation.
 
In the face of increasingly complex challenges, rapid social change, and technological innovation, governments must find new ways to do more with less, innovating in how they work. Thus, it is not enough to experiment with new policies in the laboratory of democracy if we use the same beakers. We need to change the processes by which we make policy and deliver services for the public good. The explosion of crowdlaw initiatives has already answered the question of “when.” Now, empirical yet agile research in the wild is the route to knowing “how.”

- Gabriella Capone and Beth Noveck




Footnotes

¹ Jeff Howe, “The Rise of Crowdsourcing,” Wired Magazine, June 1, 2006, 1–4. Available at: https://www.wired.com/2006/06/crowds/

² Andrea Blasco, Olivia S. Jung, Karim R. Lakhani and Michael Menietti, “Motivating Effort in Contributing to Public Goods Inside Organizations: Field Experimental Evidence” (Working Paper No. 22189, National Bureau of Economic Research, April 2016). Available at: http://www.nber.org/papers/w22189

³ Kevin J Boudreau and Karim R. Lakhani, “Using the Crowd as an Innovation Partner,” Harvard Business Review, 91:4 (2013): 61–69. Available at: https://hbr.org/2013/04/using-the-crowd-as-an-innovation-partner

⁴ Henry Chesbrough, Open Innovation: The New Imperative for Creating and Profiting from Technology, (Boston, MA: Harvard Business School Press, 2003).

⁵ John Prpić, Prashant P. Shukla, Jan H. Kietzmann, and Ian P. McCarthy, “How to Work a Crowd: Developing Crowd Capital through Crowdsourcing,” Business Horizons 58:1, 2015: 77–85. Available at: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=yiueCGUAAAAJ&citation_for_view=yiueCGUAAAAJ:vV6vV6tmYwMC

⁶ Helen K. Liu, “Crowdsourcing Government: Lessons from Multiple Disciplines,” Public Administration Review (July 2017).

⁷ Hélène Landemore, “Inclusive Constitution‐Making: The Icelandic Experiment,” Journal of Political Philosophy 23:2 (June 2016): 166–191. Available at: https://www.researchgate.net/profile/Helene_Landemore/publication/264715817_Inclusive_Constitution-Making_The_Icelandic_Experiment/links/56ab7e4708ae8f386569b7d2.pdf

⁸ Daren C. Brabham, “Motivations for Participation in a Crowdsourcing Application to Improve Public Engagement in Transit Planning,” Journal of Applied Communication Research 40:3 (2012): 307–328.

⁹ Daren C. Brabham, Crowdsourcing in the Public Sector (Washington, DC: Georgetown University Press, 2015).

¹⁰ Tanja Aitamurto, “Crowdsourcing for Democracy: A New Era in Policy-Making,” Parliament of Finland, Committee for the Future (2012). Available at: http://cddrl.fsi.stanford.edu/publications/crowdsourcing_for_democracy_new_era_in_policymaking

¹¹ Cristiano Ferri Soares de Faria, The Open Parliament in the Age of the Internet: Can the People Now Collaborate with Legislatures in Lawmaking? (Brasília: Câmara dos Deputados Edições Câmara, 2013). Available at: http://bd.camara.gov.br/bd/handle/bdcamara/12756.

¹² Tanja Aitamurto and Hélène Landemore, “Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland,” Policy & Internet 8:2 (June 2015): 174–196. Available at: http://onlinelibrary.wiley.com/doi/10.1002/poi3.115/abstract

¹³ Thomas W. Malone, Robert Laubacher, and Chrysanthos Dellarocas, “Harnessing Crowds: Mapping the Genome of Collective Intelligence,” (MIT Center for Collective Intelligence Working Paper, 2009).

¹⁴ Richard Hackman, “A Normative Model of Work Team Effectiveness,” (Yale School of Management Research Program on Group Effectiveness, Technical Report #2, November 1983).

¹⁵ Richard Hackman and Nancy Katz, “Group Behavior and Performance,” in Handbook of Social Psychology, Volume 2 (Hoboken, NJ: Wiley, 2010): 1208–1252.

¹⁶ Beth Simone Noveck, Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing (Cambridge, MA: Harvard University Press, 2015): 75–77.

¹⁷ Tanja Aitamurto, Hélène Landemore and Jorge Saldivar Galli, “Unmasking the Crowd: Participants’ Motivation Factors, Expectations, and Profile in a Crowdsourced Law Reform,” Information, Communication & Society 20:8 (2017): 1239–1260. Available at: http://www.tandfonline.com/doi/full/10.1080/1369118X.2016.1228993

¹⁸ Jonathan Mellon, Hollie Russon Gilman, Fredrik M. Sjoberg and Tiago Peixoto, “Gender and Political Mobilization Online: Participation and Policy Success on a Global Petitioning Platform,” Harvard Kennedy School Ash Center Occasional Papers (July 2017): 1–49. Available at: https://ash.harvard.edu/files/ash/files/gender_and_political_mobilization_online.pdf

¹⁹ Lena Mamykina, Bella Manoim, Manas Mittal, George Hripcsak and Björn Hartmann, “Design Lessons from the Fastest Q&A Site in the West” (paper presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, May 7–12, 2011). Available at: https://people.eecs.berkeley.edu/~bjoern/papers/mamykina-stackoverflow-chi2011.pdf

²⁰ Kevin J. Boudreau, Nicola Lacetera and Karim R. Lakhani, “Incentives and Problem Uncertainty in Innovation Contests: An Empirical Analysis,” Management Science 57:5 (May 2011): 843–863. Available at: http://www.hbs.edu/faculty/Pages/item.aspx?num=39248

²¹ Al Mamunur Rashid, Kimberly Ling, Regina D. Tassone, Paul Resnick, Robert Kraut and John Riedl, “Motivating Participation by Displaying the Value of Contribution” (paper presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Montréal, Québec, Canada, April 22–28, 2006). Available at: http://presnick.people.si.umich.edu/papers/CHI06/rashidAl.pdf

²² Cliff Lampe and Erik Johnston, “Follow the (Slash)Dot: Effects of Feedback on New Members in an Online Community” (paper presented at the Proceedings of the International ACM SIGGROUP Conference on Supporting Group Work, New York, NY, November 6–9, 2005). Available at: http://students.lti.cs.cmu.edu/11899/files/cp3a-p11-lampe.pdf

²³ Tanja Aitamurto, Hélène Landemore, David Lee and Ashish Goel, “Seven Lessons from the Crowdsourced Law Reform in Finland,” The Governance Lab, October 30, 2013. Available at: http://thegovlab.org/seven-lessons-from-the-crowdsourced-law-reform-in-finland/

²⁴ E. Donald Elliott, “Re-Inventing Rulemaking,” Duke Law Journal 41 (1992): 1490, 1492.

²⁵ “Legislative Openness,” The Open Government Partnership, 2017, accessed July 26, 2017, https://www.opengovpartnership.org/about/working-groups/legislative-openness-0

²⁶ “OGP Process Step 2: Develop an Action Plan,” The Open Government Partnership, January 2016, accessed July 26, 2017, https://www.opengovpartnership.org/resources/ogp-process-step-2-develop-action-plan

²⁷ Conversation with Pablo Collada, July 2017.