In early February, the Worldwide Threat Assessment listed CRISPR as a dual-use technology: one that can be used for good or for ill (think nuclear power). At the end of that same month, a delegation from the intelligence community asked to meet with me.
Five years ago, I would never have dreamed of typing that previous sentence. Now it’s a day in the life.
I flew back from a Keystone meeting a day early (this one was my science vacation, just for fun) to meet with the group. I really didn’t know what to expect from the delegation. I had visions of people dressed in severe suits who wore dark glasses indoors and could read my email at the press of a button. Instead, I was treated to a lively, well-rounded group of scientists, ethicists, and economists. I could have imagined any one of them walking the halls of UC Berkeley as faculty. Though one’s business card was redacted with thick black Sharpie (no joke).
We were joined by Berkeley’s Director of Federal Relations and had an outstanding discussion lasting a few hours. I learned a lot from the group about how government educates itself and came away very impressed with the people the U.S. government charges with looking into emerging technologies.
But I do have a point to make in bringing up this unusual visit.
As scientists, I think we should work responsibly with our new gene editing capabilities and be honest about the potential dangers. The whole point of next-gen gene editing is that it’s fast, cheap, and easy. I have undergraduates volunteering in my lab who edit their first genes within a month of joining. That’s exciting, but it can also be scary. I agree with the threat assessment that CRISPR could be used for bad things, since it’s just a tool. In fact, it might even be possible to misuse CRISPR by accident. Think of AAV editing experiments designed for mice that accidentally also target human sequences.
But like I said, gene editing is just a tool. A hammer can build a house, but it can also hit someone on the head. At the same time, gaining one tool doesn’t make everything easy. Try building a house with only a hammer.
Just because gene editing is now democratized doesn’t mean that bad things will start popping up left and right. Bacterial engineering has been around for a long time, but it’s still hard to do real damage in that arena. There are many other barriers and bottlenecks in the way, and the same is true for bad guys who might try gene editing.
So what should we do? As gene editors, I think we should engage closely and enthusiastically with the appropriate agencies. This includes federal and state bodies, and even local groups like campus EH&S. We should also instill a culture of responsibility and safety in the lab, above and beyond normal safety training. It’s one thing for a postdoc to remember their PPE; it’s another to think to ask, “Should I talk to someone before I do this experiment?” Security through obscurity is not the way, but sometimes it really is better to talk things through first in a very wide forum. Remember the outcry over the H5N1 flu papers…
The idea is not to scare people. The technology isn’t scary, and gene editing really isn’t new; it’s just easier and cheaper now, which changes the equation a bit. We should be open about the risks and proactive about managing them; otherwise, they’ll be managed for us.
Apologies for the long delay between posts. I was teaching this last semester and also trying to get three papers out the door.