MODULE 5

Dual-Use Risk Assessment

Your research will be used in ways you didn't intend. The same data that supports justice can fuel discrimination. This module forces you to imagine—and prevent—the worst.

⚠️ The dual-use problem
Every finding about criminal justice has two potential uses: one that liberates and one that oppresses. You cannot control how your research is used after publication, but you CAN design mitigation strategies before release. Ignorance of potential harms is not an excuse—it's negligence.
Intended use vs. potential misuse: Carceral policy research
✅ Intended use
Advocacy for justice reform
  • Evidence for sentencing reform legislation
  • Data supporting financial inclusion policy
  • Documentation of systemic barriers
  • Legal challenges to discriminatory practices
⚠️ Potential misuse
Weaponization against affected communities
  • Algorithmic discrimination in lending
  • Predictive policing enhancements
  • Insurance redlining justification
  • Divestment from reform states
Specific weaponization scenarios
🏦 Insurance companies deny coverage based on your findings
Severity: Extreme
Likelihood: High — insurers actively mine research for risk variables
Your finding that "people with conviction histories have 2.3x higher unbanked rates" becomes an actuarial variable. Insurance companies add incarceration history to their risk models, denying auto, home, and life insurance to formerly incarcerated people, even though correlation ≠ causation (the simulation after this scenario shows how a confounder alone can produce the gap).
Concrete example:
"Based on peer-reviewed research showing elevated financial instability among justice-impacted populations, we have adjusted our risk assessment model. Your application has been denied."
🤖 Predictive policing algorithms incorporate your variables
Severity: Extreme
Likelihood: Very high — policing AI vendors constantly seek new "risk factors"
Your state-level incarceration data gets cross-referenced with local records, disaggregated to the zip-code level, and fed into predictive policing systems. Neighborhoods with large formerly incarcerated populations get flagged as "high-risk," creating a feedback loop: more policing → more arrests → "validation" of the algorithm. The sketch after this scenario simulates that loop.
Concrete example:
"Our risk terrain modeling incorporates criminal justice research correlating financial exclusion with recidivism. This allows proactive deployment to areas with elevated risk profiles."
💳 Banks justify NOT serving high-incarceration areas
Severity: Extreme
Likelihood: Very high — this is algorithmic redlining
Your finding becomes banks' excuse to avoid opening branches in communities with high incarceration rates. They claim "market research shows insufficient demand" when they actually mean "these areas have too many formerly incarcerated people." Your data enables the exact exclusion you documented.
Concrete example:
"Our site selection model incorporates demographic and justice system data. Analysis indicates this market segment exhibits characteristics associated with lower banking engagement."
Required mitigation strategies
1. Non-commercial licensing
Release findings under a CC BY-NC-SA license, which prohibits commercial use, and require written permission for any for-profit application. This blocks insurers and predatory lenders from direct use.
2. Disaggregated data embargo
Never release individual-level or zip-code-level data. Publish only state-level aggregates; the sketch below shows one minimal release rule. This prevents algorithms from targeting specific communities or individuals.
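As one concrete release rule, here is a minimal sketch assuming hypothetical column names ("state", "unbanked") and an illustrative suppression threshold; the real threshold should come from your disclosure review.

```python
# A sketch of the embargo rule: individual records in, state-level rates out.
import pandas as pd

MIN_CELL_SIZE = 1_000  # illustrative; set via your disclosure review

def releasable_aggregates(records: pd.DataFrame) -> pd.DataFrame:
    """Collapse individual-level records to state-level rates; suppress small cells."""
    out = records.groupby("state").agg(
        n=("unbanked", "size"),
        unbanked_rate=("unbanked", "mean"),
    ).reset_index()
    # Small states are suppressed entirely rather than published, so no
    # sub-state geography or small community ever leaves the analysis environment.
    return out[out["n"] >= MIN_CELL_SIZE]
```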
3. Explicit anti-discrimination statement
Include in all publications: "Use of these findings in creditworthiness algorithms, insurance underwriting, predictive policing, or surveillance systems is expressly prohibited and constitutes misuse."
4. Simultaneous policy advocacy
Publish findings alongside policy recommendations that ban discriminatory uses. Don't just document the problem—advocate for legal protections against weaponization.
5. Community partnerships
Share findings with justice-impacted advocacy organizations BEFORE public release. Let affected communities control the narrative and timing.
6. Continuous monitoring
Set up Google Scholar alerts for citations of your work. When you see it cited in insurance, policing, or surveillance contexts, publicly condemn the misuse and contact the organization directly. The sketch below shows one way to triage alert exports for misuse contexts.
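Google Scholar has no official API, so this sketch assumes you export alert results by hand to a CSV with title and venue columns; the file name and keyword list are illustrative.

```python
# Flag citing works whose title or venue suggests a misuse context.
import csv

MISUSE_TERMS = [
    "underwriting", "actuarial", "credit scoring", "risk terrain",
    "predictive policing", "surveillance", "site selection",
]

def flag_suspect_citations(path: str) -> list[dict]:
    """Return alert rows whose title or venue mentions a misuse context."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):   # expects 'title' and 'venue' columns
            text = f"{row.get('title', '')} {row.get('venue', '')}".lower()
            if any(term in text for term in MISUSE_TERMS):
                flagged.append(row)
    return flagged

# Usage: review flagged rows by hand before any public response.
# for row in flag_suspect_citations("scholar_alerts.csv"): print(row["title"])
```

Keyword matching only surfaces candidates; a human must confirm actual misuse before condemning anything publicly.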
You cannot prevent all misuse, but you must try
Mitigation strategies won't stop determined bad actors. But documented mitigation attempts establish that weaponization violates your intent, creating legal and ethical accountability for misusers. More importantly: it forces YOU to confront what you're enabling before it's too late to redesign the study.
Non-negotiable requirement
Before publishing any criminal justice research, you must document every plausible weaponization scenario and your mitigation strategy for each. If you cannot mitigate a severe harm, you must either redesign the study or explicitly justify why the knowledge is worth the risk in your ethics statement. "I didn't think about it" is not acceptable.