
Session 3: Planning the Study: ELSI Issues in Study Design and Governance – John Wilbanks

John Wilbanks:
So, I’m going to cover a slightly different topic. Feel free to look us up.
We’re a nonprofit based in Seattle at the Fred Hutchinson Cancer Center, and what we
do is we try to connect groups of people who have data and would like to share data with
groups of people who like to analyze data. And so, we have a set of different systems
inside the organization, including a data use and version control platform for the
sort of people who really like to analyze data from the command prompt. We have about 8,000
regular users there, and we’ve just started branching into actually running observational
studies through mobile devices. The first two of these have been approved to enroll
up to 100,000 people each, and so we’ve had to develop a whole set of other governance capacities
that let us enroll people in a way that we think is informing. So, I’m going to focus
on that piece and we can talk about some of the other stuff on the panel. So, if you’re going to enroll 100,000 people
in a study that is only mediated through the mobile device, simply rendering the consent
document as a PDF is not a very informing choice to make. So, Sally alluded to interaction
and persona-based design, and that’s actually what we spent most of 2014 doing. We did about
35 interviews in conjunction with Academy Health and the Electronic Data Methods Forum
with a variety of stakeholders and did persona creation to see, “What are the kinds of
things we can do to make informed consent more of an informing process than just a static
document given that we’re not going to have the experience of the participant talking
to a clinician and having them hand them a form?” And what we came up with was very
much inspired by the way software design works; it was to think about, “What’s a process
that you enroll someone over the course of, using their phone, that actually informs them
enough about the study to make some choices?” And so, the way that it works is, you download
an app on your phone. One of the studies is in Parkinson’s symptom variation; the other
is in post-chemotherapy cognitive impact, and one of the key ideas from the interviews
was, “Well, if it’s on my phone, I don’t want to just swipe right or tap on everything.”
There needs to be something that brings the experience of the study back to the participant
before they enroll in the study, so one of the things we do is we record sensor data
on the phone during very specific tasks. All right, so if you have Parkinson’s disease,
you have hand tremor; you have gait disabilities. Well, we have an enormously powerful way to
measure that by saying we’re going to send you a notification and ask you to walk 20
feet in one direction and walk 20 feet back, and we record the accelerometers and
the gyroscopes on the phone to get a quantitative measure of your gait. We similarly ask you
to say, “Ahhh” into the microphone for 10 seconds at a time to get the muscle tone
of your voice box. Now over time, over the course of the year, which is the timeline
for the study, this gives us an incredibly quantitative picture of the progression of
the Parkinson’s symptoms. We’re also doing surveys, right? So, the
idea was, during your actual consent on the phone — and we’re not going to assume
that everyone understands that there are sensors on their phone that can be recorded this way,
so you have to shake the phone at that point in the consent process to activate the sensors
rather than tap and continue to move forward, all right? — we have an animation that is
icon based that shows you, as you’re going through a survey, that you don’t have to
answer all the questions if you don’t feel comfortable. And we came up with about 11
core screens that were generic to informed consent, in our opinion, for a low-risk, observational
study on the phone. We use the same exact screens for two very different studies. One
is, you know, the post-chemo cognitive impact; one is Parkinson’s, but the interface layer
is consistent because they’re very similar studies structurally. And, indeed, we’ve
now been able to work with three other institutions who are designing similar-style apps, to say
there’s actually a chance for convergence at the interface layer for informed consent
in a way that makes it more informing, in many ways, than even talking to a clinician
because clinicians are often in a hurry and just hand you a form and say, “Sign this
or you’re not in the study.” What we’re hoping comes out of this — and
we’ve released all of these things on our website as an open-source toolkit. I mean,
it’s just a methodology. It’s just a language that uses iconography instead of text to communicate,
and it thinks about the process that you move through, so it’s not something that we want
to make proprietary — but what comes out of this is, from the geek’s perspective, if you
start to stabilize on a set of icons and screens — as a sort of baseline, 11-screen
consent interface — first of all, that makes it easier to do interoperability analysis because
you can start to see, structurally, if someone inserts a weird piece of information or requirement
that hurts later integration of the data, but it also gives the patient communities
a vocabulary for what kinds of studies they want to be involved in, and that’s a vocabulary
that’s been very difficult to create using traditional legal text methods. If you can
actually say, “These icons,” right? “These restrictions or freedoms or kinds of data
that we are comfortable with, this is what we want as a community” — it can be a very
powerful way to start signaling out to researchers, so you can get some of that matching done
ahead of time. And one of the reasons that I wanted to bring
this up was Jason’s point earlier about failed relationships being something
that’s really important to anticipate, and we’ve had a tough
relationship with a patient advocacy group, and the problem was that we were going too
slow, right? We were developing sort of on a research timeline, you know, going through
a design process in a way to build something we thought would be much more effective for
them. But it took so long that they just got really angry. Really, really emotionally angry
with us, and we didn’t hear that until so late in the process that it was really hard
to pull it out. So, my hope is that there’s real benefit to treating consent as a process,
not sort of a static moment where you say yes or no to a document, but the reality is,
when you design a really large study, you have to have that cohort be analyzable; the
data need to be distributable so you can verify the sorts of predictive models that get built
on it, but you can start to create signals so that someone who is going through the process
understands that it matches their individual values, and you can create a vocabulary so
that communities can declare their values in advance. I think that’s going to help
build relationships that fit because the sorts of uses that we make of data at Sage really
require fairly broad, long-term consent because we want to build predictive models, file those
predictive models with publishers, and have the data that led to the conclusions that
were published be replicable and re-analyzable. So, it’s really hard to offer people the
opportunity to come back and delete their data or delete their consent and be consistent
with the scientific process that we’re taking forward. So, it’s really important for us
to be able to connect with communities and know at the beginning that the values that
they have match with the scientific processes that we have because there are implicit values
buried in the ways that certain types of science get done, especially large-scale data science,
that are not going to be comfortable to all communities, and I think it will help all
of us if we can do some of that impedance-matching up front. And I think that I have, like — what?
One minute left or am I done? [inaudible]

John Wilbanks:
Okay, so — just the last piece of this, which is that we think a lot of people are going
to do these sorts of app-based, longitudinal, observational studies because they’re easy,
right? We’re standing up two, and we don’t have any funding. That’s how cheap it is
to roll this sort of stuff out and we’re going to be making all the source code, all
the web-services, all these things available to other people who would like to run such
kinds of studies, but the thumb that we’re going to put on the scale — the tacks we’re
going to put on the scale — is that no one’s going to be able to use the architecture if
they don’t return all the data to the participant on demand in an exportable format. All right,
so we’re making these sorts of unitary demands on the cohort from a science perspective,
but the deal is, if you want that, you have to give complete access to the data back to
the participant and not just as a downloadable tarball, right? What’s more important,
you have to be able to push the data to a third party, like PEER [spelled phonetically]
or some of the other platforms we’ve heard about, where you can manage your data as a participant
once it’s gotten outside of that clinical research context. I’ll stop there. [end of transcript]
