Monthly Archives: October 2016

Taking a Collaborative Approach to Addressing Racial and Ethnic Disparities in the Justice System



Tshaka Barrows, deputy director of the Burns Institute, discusses his organization’s collaborative and community-centered approach to addressing and eliminating racial and ethnic disparities in the justice system. Barrows spoke with Robert V. Wolf, director of communications at the Center for Court Innovation, after participating in a panel on Race and Procedural Justice at Justice Innovation in Times of Change on Sept. 30, 2016.

TSHAKA BARROWS: We
call it a system, but it really isn’t a system. It’s much more of a grouping of semi-autonomous agencies
that have very little accountability to each other.

ROB WOLF: Hi, I’m Rob Wolf, Director of Communications at the Center for Court Innovation, and today I’m in North Haven, Connecticut at the Justice Innovation in Times of Change conference. Sitting down with me is Tshaka Barrows, who is Deputy Director of the W. Haywood Burns Institute, which works to address racial and ethnic disparities in the justice system. The institute is based in Oakland, California. You’ve come a long way to attend the conference and participate and to sit and talk with me, thank you very much.

BARROWS: I’m glad to be here.

WOLF: Let’s
talk about the work of the Burns Institute and in particular, how you work with jurisdictions to reduce racial and
ethnic disparities in the justice system. You have a specific approach you take to looking at this issue and trying
to address it. Maybe you could summarize for me what that approach is.

BARROWS: At the Burns Institute, our approach is to build a collaborative of the different agencies that make up the justice system, and I always tell people, I just told the group, we call it a system, but it really isn’t a system. It’s much more of a grouping of semi-autonomous agencies that have very little accountability to each other. The whole notion of trying to address disparities has to be done with that context in mind, because one agency’s decisions bump into the next, bump into the next, and the impact is felt by the individuals who are going through it, and we see it in the disparity numbers. To really create a strategy to address it you have to have all those key players from each of those agencies as a part of your collaboration. We also fundamentally don’t think that just having those kinds of traditional stakeholders is enough. Our process requires that we engage meaningful participation from community stakeholders who’ve had experience with the justice system, who live in the neighborhoods that our data shows are the target neighborhoods, where more people are coming from, so that they can bring both the experience of having traveled through the system, through the various agencies, being passed from one to the other, and also what it’s like living in a community that is targeted for higher involvement for various reasons: policies, policing policies, a lack of resources, any number of conditional factors.

This whole notion of creating a more fair sense of procedural justice can’t be done without accounting for the fact that certain neighborhoods are much more highly represented in the system. In our process we really aim for participation with community stakeholders, which is very different. People are a little bit afraid of that. The idea that you’re sitting in a meeting sharing data with people who are upset with the agency, who did not feel that they were treated fairly, who are angry about the realities that folks in their community face, is a threatening notion for most traditional stakeholders. A conversation about race in this country is typically a bit unnerving for people anyway; it’s not like that’s a regular practice that we have.

WOLF: And you literally bring everyone together in the same room? That’s the process, it’s like, “Let’s all sit down together.” What does that look like, how many people are actually sitting around a table or in an auditorium?

BARROWS: That’s a great question. We build a collaborative, and it’s a process to even build it. We don’t try to just come in with a cookie-cutter kind of prescription. We want to understand from the local players. Justice happens locally; there’s culture. Who do they think are the key people they need to have there, and how many? Sometimes we may get huge representation from one agency, where it’s like, “You guys are kind of dominating the meeting,” and we may need to adjust that, so there’s a need to attend to the actual formation. Typically it’s, I’d say, between 10 and 20 stakeholders depending on the size of the jurisdiction. We work in very small rural places; they may not have a huge collaboration. I’ve worked in jurisdictions that have had up to 30 people who meet every month, but that becomes a challenge in and of itself, because if everybody just introduced themselves that would take time. For us to have meaningful dialogue about certain issues in a meeting of that size can be a challenge, and so we really want to look for a sweet spot that allows for equal representation across the agencies and doesn’t leave behind any one particular group.

WOLF: What happens then, what’s the process? You said every month, so it’s ongoing … Are you trying to build a permanent infrastructure for dialogue, or is it time-limited, let’s meet for x number of times to work on this?

BARROWS: That’s another great question. Our process would be monthly, and we hope, as we’re setting the jurisdiction up, that it will maintain the process without us. We do a whole orientation to really try to help everybody fully participate. We don’t want people just sitting there like, “I don’t know what this is, I don’t know what’s going on,” with acronyms flying over their heads. We spend time doing coach-ups for the community stakeholders, and we also orient the system folks to what the meeting will be like when they have community members there who might be more frustrated or going to ask lots of questions. Rarely is a system stakeholder very good at telling the story of their institution and how it got to this point. We have also started working with them on telling that story: you’ve got to own this. You didn’t do all this, and you don’t have to apologize for the history, but you need to own the fact that there were some practices that were not the best, that we were doing and have been working to address, because that engenders a level of respect for the process and opens up the community to thinking, “Okay, you really are serious about doing something different.”

WOLF: And when everyone sits down, have they already accepted the premise that there are racial and ethnic
disparities –

BARROWS: Yeah.

WOLF: Or do you also need to establish that as
the facts on the ground?

BARROWS: We will likely revisit it. A lot of times people will say, “Oh yeah, no, we’ve all … We understand we have a problem.” And then it’s like, “Let’s talk about it.” And then we start asking, “What do you think is contributing to the problem?” It’s one thing to say, “Yeah, our jail or our juvenile hall is full of people of color.” It’s another thing to say, “And we think we have a responsibility for that, we think we’re contributing to that.” When we ask the question, “What do you think drives this?” everything but them usually is the response we get, which lets us know: you probably don’t realize what this is going to feel like, and you’re going to feel like, “Well, why are you guys asking us about what our decisions are?” It’s because you have control over that. You don’t have control over external factors like Hollywood violence and movies, the culture of violence in music, or just the fact that there is this history of segregation in the country. You can’t just undo that in your collaborative. You don’t quite have the power to say, “You know what, let’s just change the zoning and all the ways that the neighborhoods are set up, and let’s go ahead and make it so that job discrimination doesn’t happen anymore.”

It’s like, those things aren’t really in the purview of that particular collaborative, but their decision-making practices are. You can control who you violate for probation. Do you send out bench warrants before actually reaching out to people in their native language? Do you know if your court letters are landing on folks who couldn’t understand them in the first place, so that now you’re putting out a warrant for someone who never fully engaged with the information in the beginning? So we then analyze each decision point by race, ethnicity, gender, geography, and offense. It’s a way to understand, at each decision point: what are we doing, what is the impact of our decisions, where are people going, what’s happening?

WOLF: How do you know the impact of all those factors you just mentioned at each decision point, meaning at arrest, or a decision to charge, or a decision to carry a case forward, or a decision to sentence or to plea? All those things are decision points, right? Do you just ask people, “What do you do?” Or are you looking at actual hard data and numbers?

BARROWS: We first go to hard data and numbers if they have it; oftentimes they don’t. That is a huge problem. We’re also not researchers; this isn’t a research project. We’re not trying to prove that the data we’ve got is super accurate. Basically we use what you have to try to figure out a way forward, understanding your data might not be perfect. One issue we see all the time is the issue of ethnicity around Latinos. Very few jurisdictions have a really great practice for capturing Latinos within their justice system. Typically they get captured as White, so it skews the White population up and it skews the Latino population down, and it throws off all the comparisons that we want to make. There’s a set of conditions that contribute to it: it’s an ethnic group, people speak Spanish, maybe they don’t. There are a lot of factors; there can be very light-skinned Latinos. One of the things we ask is, “Who decides? Is it your staff? Do you ask the person directly? What’s the process for the collection of the data?”

Typically once we start to analyze it and show it back to them in meetings, we’ll start to get some pushback: “Where did you get these numbers, what is this? This is wrong.” It’s like, “These are your numbers, we got them from you. They may not be as accurate as they could be, but this is what we have right now, so let’s get started.” We don’t want to be in a never-ending process of cleaning the data, reviewing the data, and then getting into this iteration of the questions, “What else do we need to think about, what else?”, versus “I think we know enough. There’s a tribe on our … We have a reservation in our county, and 30% of the young people in our justice system, or 30% of the adults, are coming from that reservation. I think we can start there.” Maybe we want a tribal affiliation and we need to go a bit deeper, and those things are helpful, but that’s where we like to begin. Once we orient folks to the process, we’ll do a history, talk about how this country started, how the justice systems started, give everybody an equal understanding of the playing field, and then what we like to do is start actually looking at data, looking at what they have.

Like I said, we’re not a bunch of researchers. We take people’s dirty data and use it; we’re not just going to say, “We can’t go forward until this is pristine.” It’s like, “Well, no, this is what you have right now.” There are tiers. The first tier is: if you think it’s not clean enough, what do you need to do to adjust it? We can try to help with some of that, but really that needs to be owned by the jurisdiction. How do you analyze it? Is this a new practice? If it’s new, they might become defensive when all of a sudden they’re sitting in a meeting with their peers, other agency heads, looking at data that really shines a light on their staff’s decision-making practices, and feel like, “Well, wait a minute, why is everyone looking at us?” For the first group to get that scrutiny, usually it’s a little bit raw, because this is a whole new practice. They may not even look at this data regularly internally, and so there’s no defense in place to explain away what’s happening; there’s this kind of nervousness. That’s a process in and of itself.

All of this takes time; none of this is fast. Our main goal is to get to the point where we can have the group establish a target population for racial and ethnic disparities that they want to move safely out of their system. We keep looking at the decision points. We’re not going to pick the most politically challenging; we’re not going to look at armed robbery, if you will. A lot of times people are not ready to say, “Yeah, let’s move those folks out of the system safely.” We’re looking at bench warrants, violations of probation, offenses that aren’t about overall safety at all but much more about the administration of services, yet totally contribute to disparities in real ways, so you can imagine.

WOLF: So
then you get consensus and you say, “We’re going to target -“

BARROWS: We’re
going to work on these target populations.

WOLF: People who’ve violated probation, or young people, or something like that –

BARROWS: We try to show it as a number per month. What I don’t want to do is say, “Yeah, each year you have 500 violations of probation and 50% of those are Black males from these two neighborhoods.” Over the course of a whole year, how do you understand what your work is? What I like to say is, “Okay, of that per year, how many is that per month? What are we actually talking about on a monthly basis? Can we dig deeper to understand how these cases live?”

Now we’re looking at each month, and maybe 25 or 30 people were violated. Let’s understand the nature of that: what are the probation officers’ perspectives on this, what programs were they in? You want to then, we call it peeling back the onion, get down to this target. Now you want a focus group, you want to bring in line staff, you want to talk directly to people who have been through that experience, and you’re looking for not just a policy change; you’re trying to understand what kind of innovation or intervention we can come up with to move this.

WOLF: It sounds, though, like it’s on a very … I don’t want to say small scale, but you have to target this group and that group in terms of making a difference. It’s not like, “Here’s a solution,” and it ripples throughout the whole system and the disparities.

BARROWS: No, you have to monitor and track it. It’s everything you said, and you have to monitor and track it. Literally, we’ve come in and people have said, “Yeah, we have disparities. 81% of our inmates are African American.” And it’s like, “Okay, well, what else do you know about it?” “Nothing.”

What could you do? Are you just going to say, “Oh, let’s just release 81% of the inmates and reduce the disparity”? Nobody’s going to do that. Their hands get tied. We have all this big-picture data, annual snapshots, and none of it helps people know what to do to move forward. We’ve developed a strategy and approach that really breaks it down into workable pieces, and we even have a slide we go through that really shows people: if it’s a state law and that’s the reason why this person is locked up, you can’t change that. But if it’s a policy that you just detain people for this because you feel strongly, well, you can stop that tomorrow. That’s just an internal office policy; that’s not a state law. Understanding how these things play out is really crucial, but it takes time, it takes that investigative work. You have to include the people who do the work day to day, the line staff, not just the supervisors and managers; these are the people who are trying to make it work.

WOLF: I want to ask one more question, but I think it’s probably a complicated one that has a long answer. How do you deal with the issue of implicit bias? Everything that you’ve described to me is something you could see on paper and go, “Oh, look at this number, look at this policy, you put these together and that equals a disproportion or a disparity.” What about the things that are more intangible yet that we know have an impact at these decision points? Why someone decides to charge someone with … give someone a higher charge and someone else not. If there is bias involved and it’s happening in the back of their heads and they don’t even know it, how do you address that?

BARROWS: Well, because we can do case-level analysis, we can show two similar situations and say, “Let’s talk about … How did you make this decision here? Why did you make this decision?” And not try to label someone and say, “We’ve caught you.” We’d rather show them what they’re doing and see if they themselves can see these patterns. Also, when we bring community people into the meeting, they are going to naturally see those patterns, because that’s their experience. They’ll ask the question very directly: “I don’t think that that makes sense.” You need that person who’s not going to play so much by the rules, to say, “Why do we do that? That doesn’t seem to make sense,” or “Why is that in this neighborhood?”

I’ll give you an example. In one city we worked in, in a particular area of town, any Latino kid with a marker was considered to be in a gang, writing gang messages on the walls and creating potential shootings. It was a narrative that turned into an automatic hold for any Latino kid with a Sharpie. Somewhere there’s bias loaded into that, but if you just came in the door and said, “You guys are racists and you’re picking on Latino males,” you’re going to run into a lot of opposition. It’s another thing to start peeling it back and say, okay, well, this is some of what we’re hearing from your own staff and public defenders: certain judges see these kids with markers and they think gang membership, and everybody kind of follows suit, but when we’ve actually looked at it, that’s not the case. We try to come at it in a way where people are going to be able to listen and hear.

WOLF: Absolutely fascinating, sounds like you’re
doing amazing work.

BARROWS: Trying to, trying to.

WOLF: They’re very
difficult and complicated issues.

BARROWS: Yeah.

WOLF: I’ve been speaking
with Tshaka Barrows who’s the Deputy Director at the Burns Institute in Oakland, California which is working
to address and diminish and eradicate racial and ethnic disparities in the criminal justice system. Thank you so
much for taking the time to talk with me.

BARROWS: Thank you.

WOLF: I’m
Rob Wolf, Director of Communications at the Center for Court Innovation here at the Quinnipiac University School
of Law for our Justice Conference and thank you very much for listening.


The Potential for Bias in Risk-Assessment Tools: A Conversation



In this New Thinking podcast, Reuben J. Miller, assistant professor of social work at the University of Michigan,
and his research collaborator Hazelette Crosby-Robinson discuss some of the criticisms that have been leveled against
risk assessment tools. Those criticisms include placing too much emphasis on geography and criminal history, which
can distort the actual risk for clients from neighborhoods that experience an above-average presence of policing
and social services. “Geography is often a proxy for race,” Miller says. Miller and Crosby-Robinson spoke
with the Center for Court Innovation’s Director of Communications Robert V. Wolf after they participated in
a panel on “The Risk-Needs-Responsivity Framework” at Justice Innovation in Times of Change, a regional summit held on Sept. 30, 2016
in North Haven, Conn.

Reuben J. Miller, assistant professor of social work at the University of Michigan, and his research collaborator Hazelette Crosby-Robinson participate in a panel at “Justice Innovation in Times of Change,” a regional summit.

WOLF: Hi, I’m Rob Wolf, Director of Communications
at the Center for Court Innovation and today with me at the Justice Innovation in Times of Change Conference here
at the Quinnipiac School of Law in North Haven, Connecticut are two of the panelists who participated in a discussion
about risk needs assessment tools. They are Professor Reuben Miller, who is an assistant professor of social work
at the School of Social Work at the University of Michigan and his research assistant at the School of Social Work,
Hazelette Crosby-Robinson.  Thank you so much for taking the time after your panel to sit down and talk
with me.

MILLER: Thank you for having us.

WOLF: So, I wanted to
just start off talking about the risk assessment tools and some of the criticisms that have been leveled against
them because, as we heard on the panel from Sarah Fritsche, a colleague of mine at the Center for Court Innovation,
their use has exploded and they’ve been embraced as a decision-making tool in the criminal justice setting.

MILLER: Sure.

WOLF: But you raised some potential concerns about them and some of their
limitations and I wondered if you could share what some of those limitations are as you see them.

MILLER:
Sure, I’m happy to. So, Hazelette is my research associate and collaborator. She’s super modest.

So, I’d like to first preface this by saying, some scholars have suggested that we’ve really entered
an actuarial age. So it’s not just risk assessment in criminal justice, but a whole cost benefits calculus,
a whole risk calculus that’s based on actuarial models that try to predict future harm. So they try to predict,
much like an insurance company would try to predict the future risk of a car accident. In a criminal justice setting,
these risk needs assessments are trying to, one, gauge the needs of incarcerated individuals or people who have been
convicted of a crime to try to figure out where they could shore up deficits in their skill sets or in their general
stability. So for example, they might examine things like housing stability, or whether or not one was employed,
or what kinds of service needs they may have. So for example, if one has a history of substance use and abuse, that
would indicate that they need treatment or some sort of intervention based around these things.

And at the same time, they’re trying to gauge the risk of re-offense, so the risk that they will commit a crime. So there are a number of criticisms. The literature that engages this is fairly long. I tend to think about some of the movers and shakers in this field: Kelly Hannah-Moffat, Bernard Harcourt, Sonja Starr, Faye Taxman. Faye Taxman’s work is actually helping us to think about important ways that we can implement risk assessment that reduce some of the biases that are sort of baked into it. But just to talk about some of the critiques that have come from this literature, and of course my own: on the one hand there are static factors like where one lives, so geography, and prior criminal history. These are things that people can’t avoid. And there’s the privileging of recidivism as an indicator of success. These are all problematic for the following reasons.

So geography is often a proxy for race. We know that we live in a country that has a pattern of residential racial segregation. And we know that policing and criminal justice resources of all kinds are overwhelmingly distributed in areas where poor people of color tend to live. The problem is, people are now being arrested from, returned to, and even given programs designed to rehabilitate them all within low-income communities, very bounded geographic districts. And so what you get is the overwhelming concentration of criminal justice resources, and you get a signaling of what that all means. If the substance abuse treatment house is located in a neighborhood, then that tells me that there are substance abusers there. Right? And so that signals narcotics forces to the community. It says something about the community. Halfway houses are also overwhelmingly there. And so one must think about what the concentration of these things does. So now, okay, as it relates to risk: being in a neighborhood like this triggers a higher risk score. It is indeed one of the measures of risk, and so in that way it’s a proxy for race. Sorry, I know I’m talking quite a bit, but –

WOLF: No, and just to kind of summarize though, or to recap what you’ve said
so far, the way risk assessment tools work, they place a high value on the location someone’s from. They place
a high value on their history with arrest.

MILLER: That’s absolutely right.

WOLF:
And so, if there’s a preponderance of enforcement there, some people are more likely to have an arrest record
or –

MILLER: The study of Stop and Frisk made this abundantly clear: even when people aren’t doing anything wrong, they’re being overwhelmingly stopped if they’re Black or Latino. And we know that criminal justice contact increases the likelihood that one will be arrested. So this is a big problem with using prior arrest records, for example, and even prior conviction records. Now you’ve got a bunch of arrests. By the time you get to the prosecuting attorney, they’re going to say, “Look, you’ve been arrested 14 times.” “Well, I’ve been arrested 14 times but never charged.” “No, but you have a history of arrest, and so I’m going to charge you now because I see a pattern.” This is how statistical discrimination might work, or does in fact work, in practice. So now the prosecuting attorney sees a pattern and sends it before the judge, who looks at this pattern and interprets it to make a decision about the length of the sentence when the conviction is read, as does a jury if it ever goes to trial. 97% of cases never go to trial, but when one does, the jury is presented with the same evidence of patterns, which have more to do with where the police are concentrated than with what people are actually doing.

WOLF: So what do you say to the notion that these instruments are validated? That they predict? Whether or not there’s a potential bias incorporated into them, they can still predict, six months to a year out, whether someone is going to commit another crime.

MILLER: Yes, with great reliability. But it’s a population being normed against itself. And so, I overwhelmingly concentrate criminal justice resources in a particular neighborhood, which leads to more arrests, which leads to more convictions, which leads to more imprisonment. Then I look at those who were imprisoned, and I use that to validate my measures. So the problem is this sort of self-fulfilling prophecy, this feedback loop. This is one problem.

Another problem is that, and Kelly Hannah-Moffat points this out brilliantly, correlation and causation are very different things. It’s the standard social science response that any armchair social scientist gives when they look at a relationship between two things that people are using as some sort of cause: the likelihood that particular groups of people are more likely to commit a crime doesn’t mean that having committed a crime in the past means you actually will commit one. And so what we’re doing is, we’re treating a relationship as if it’s a cause, as if it’s a fact. And so I will sentence you now based on my assumption of your future danger of committing a crime, based on a set of assumptions that I used to justify the overwhelming concentration of police to begin with. Police aren’t the culprits here. It’s a rationality, a way of approaching problems, that I think must be critically investigated.

WOLF: And you also pointed out in your presentation that perhaps the cultural context, the environment, and the policy culture change. For instance, marijuana arrests, which were so vigorously pursued several years ago, are now considered a low priority, or they’re not even being made anymore. And yet people have a record of those arrests, and if history of arrest is a factor, someone in the audience also questioned this: should we drop those particular kinds of arrests as a factor because we don’t care about them anymore? Do they indicate a further likelihood of breaking the law, or are they just something someone did because they like marijuana, and that’s it?

MILLER: That’s right. And this is a part of the rigidity of risk assessment, the rigidity of risk categories. To place one in a category: you are an offender. And in Michigan, where I’ve done a lot of research and where I’ve worked, habitual offenses … and it’s not just like this in Michigan, it’s like this in many, many states, most states I would argue … being a habitual offender means more time, greater risk, more punishment.

CROSBY-ROBINSON: Up to life.

MILLER: Absolutely. So what does it mean to habituate? What am I looking at? Well, if I’m not being careful about the criminal codes, if I’m not carefully examining what I considered a crime at a given moment in time and adjusting my instrument for that, which must happen probably annually, if not quarterly, if I’m not adjusting for different understandings of what is right and wrong, then what I’ll end up doing is habituating someone: giving them longer sentences, giving them harsher treatment, deeper levels of punishment, or indicating they need deeper levels of intervention.

WOLF: So tell me what recommendations you’d make. Because you also made a point in the panel that there are some good things about risk assessment. They do take away discretion from judges or people whose own bias might lead them to make the wrong decisions.

MILLER: Absolutely. One benefit of risk assessment is to use it to avoid the criminal record to begin with. That’s one bit of it. So if you are low risk, low leverage, as my colleague pointed out earlier today, then you are not indicated for intervention of any kind. And it’s better to just release these folks without intervention of any kind.

WOLF: Right, and that’s what the research supports.

MILLER: The research supports it, absolutely. So what risk assessment allows the careful prosecutor, judge, public defender, et cetera to do is to remove some of the discretion, because many of the decisions that are being made are based on a gut feeling. I am reading something in the defendant: they don’t have remorse, or they haven’t shown accountability for their actions, or they have, as one of the panelists raised, belligerent interactions, let’s say with their parent or the prosecuting attorney. And my assessment is happening divorced from what it means to actually be in court in that moment in time. How might a child, 17 years old, respond to facing 20 years in prison? How should they respond? Should they be depressed, sad, angry, avoidant? What are our expectations in this moment? And so what risk assessment allows us to do is say, “Okay, let me take a step back, let me look at what actually happened. Let me get away from my intuition, let me think about a more objective way to assess how this defendant should be treated.”

An interesting note … So here it is. We can use smart risk assessments to think carefully and critically about how we treat offenders, what level of intervention we lay out, whether that intervention be prison, or jail time, or a diversion program, or a treatment group. There’s no perfect way to do this, which is why constant reevaluation is necessary. You can’t settle on “this is the instrument for me.” You can’t settle; it’s not the instrument for you –

CROSBY-ROBINSON: Continuous improvement.

MILLER: Continuous improvement.

WOLF: And maybe testing … If I understood what Sarah Fritsche, my colleague and researcher at the Center for Court Innovation, said, you also can test these instruments within certain populations and see: are they producing more negative outcomes for an African-American population? And ask the questions that you’re asking to weed out the bias that might be built into them.

MILLER:
Absolutely. The questions that we’re raising are in some ways a set of philosophical questions, but they’re
questions about the application, the use, the embrace of instruments to determine whether or not someone is a
future danger. Perhaps this is just the wrong approach altogether. Not that one doesn’t
need to think about ways to help predict the behavior of individuals. I think that’s useful in some
ways, but it certainly needs to be challenged, it needs to be questioned. What am I predicting? Who am I predicting
this for? What are the possibilities for this person once these predictions are made? These are questions that need
to be addressed.

WOLF: So, Ms. Crosby-Robinson, let me ask you: as we talk about these kinds
of assessments, you bring to bear your own set of experiences with the correctional system, as a researcher and through your
own past history, which you referred to on the panel, as someone who was formerly incarcerated. And I wonder what
insights that has allowed you to bring to this question? Presumably a long time ago they didn’t have these
risk assessments, I don’t know … when you initially had your first contact with the correctional system,
the justice system. And now they do, and you’ve had a lot of contact and opportunity to interview and spend time
with people who are incarcerated, and I wonder where you come down on this issue?

CROSBY-ROBINSON:
Well, first of all, I think it’s a good idea to have a risk assessment, as Reuben said earlier, because it
removes some of the pressure on judges and prosecutors to make these decisions based on their own personal bias
or how they’re feeling at the time. But what it does not account for are all of the various little nuances
that a person is going through when they come out. Family reunification can create a stressor. If somebody’s
coming out, they may have to be paroled to a family address, a suitable relative for placement. So they’re coming
to this family address, but the address that the parole officers decided the person can parole to is
not really the best environment, and sometimes the issues that led to their incarceration stem from
the family issues that they were having at the time. Or it’s not the right environment, or they don’t
really have enough support from their family. And things happen, because lives are fluid and things change.

For instance, we interviewed a person who was 17 years old and she was pregnant. She had a mental illness;
she’d been in the mental health system since she was 8 years old. She lived with her grandmother. We interviewed
people multiple times: as soon as they were discharged, and then 30, 60, and 90 days after they’d
been out. And so, following her, by the time we got to her third interview, her grandmother dies. She’s living
in her grandmother’s house. This is the only stable person she’s known in her life. Her grandmother has
raised her since she was 9 years old. She just had a baby, and the baby isn’t even a year old yet. Now she’s 18 years
old, she has a mental illness, and she’s relying on that system to become her support, where her grandmother
was everything. Well, these are things that a risk assessment would just not pick up, because you never know what’s
going to happen. So now what happens to this individual? We’re at the end of the time that we follow this person
for our study. But you know, the question is in our mind: what happens?

And another thing that
I find frustrating is that no matter what your risk assessment is, whether you get it right or not, when a person
gets out into the community, whatever the risk assessment decided that they need as a support or an intervention,
there’s no community resource for that.

WOLF: The theme I’m hearing from both of you
is that these risk-needs assessment tools cannot be judged or effectively used apart from the environment, whether
it’s the environment that created the measurements of risk, or the needs. Because even if you can identify the needs
and say great, if you don’t have the resources in the community, it’s meaningless information.

MILLER: So, Faye Taxman has a great paper. She finds that, on the need side of things, substance abuse treatment
is indicated in about 90% of the folks who are justice-involved, but the capacity to provide the treatment, either
in prison or out, falls far short. Something like 25% of the folks in prison who needed it were able to actively engage
in regular substance abuse treatment. And so what this does is it creates another deficiency that one might judge or regard as part of
the risk that this individual will recidivate. Did you complete programming? Was programming available, either in prison
or out?

WOLF: Well, this has been a very vigorous and interesting conversation and I really appreciate
you both taking the time to speak to me about your work.

MILLER: Yeah, thanks for having
us.

WOLF: So, I’ve been speaking with Reuben Miller, assistant professor of social
work at the University of Michigan School of Social Work, and his research assistant and collaborator, Hazelette
Crosby-Robinson. We’re all here today in North Haven at the Quinnipiac University School of Law for the conference
Justice Innovations in Times of Change, which is sponsored by the Center for Court Innovation and the Department
of Justice’s Bureau of Justice Assistance and hosted by Quinnipiac University. You can find out more about risk-needs
assessment and criminal justice reform in general at our website, www.courtinnovation.org. I’m Rob Wolf,
thanks for listening.