The 7 biggest problems facing science, according to 270 scientists

http://www.vox.com/2016/7/14/12016710/science-challeges-research-funding-peer-review-process

 

“Science, I had come to learn, is as political, competitive, and fierce a career as you can find, full of the temptation to find easy paths.” — Paul Kalanithi, neurosurgeon and writer (1977–2015)

Science is in big trouble. Or so we’re told.

In the past several years, many scientists have become afflicted with a serious case of doubt — doubt in the very institution of science.

Explore the biggest challenges facing science, and how we can fix them:

  1. Academia has a huge money problem
  2. Too many studies are poorly designed
  3. Replicating results is crucial — and rare
  4. Peer review is broken
  5. Too much science is locked behind paywalls
  6. Science is poorly communicated
  7. Life as a young academic is incredibly stressful

Conclusion:

  • Science is not doomed

As reporters covering medicine, psychology, climate change, and other areas of research, we wanted to understand this epidemic of doubt. So we sent scientists a survey asking this simple question: If you could change one thing about how science works today, what would it be and why?

We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science.

The scientific process, in its ideal form, is elegant: Ask a question, set up an objective test, and get an answer. Repeat. Science is rarely practiced to that ideal. But Copernicus believed in that ideal. So did the rocket scientists behind the moon landing.

But nowadays, our respondents told us, the process is riddled with conflict. Scientists say they’re forced to prioritize self-preservation over pursuing the best questions and uncovering meaningful truths.

“I feel torn between asking questions that I know will lead to statistical significance and asking questions that matter,” says Kathryn Bradshaw, a 27-year-old graduate student of counseling at the University of North Dakota.

Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public.

Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase “publish or perish” hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side.

“Over time the most successful people will be those who can best exploit the system,” Paul Smaldino, a cognitive science professor at the University of California Merced, says.

To Smaldino, the selection pressures in science have favored less-than-ideal research: “As long as things like publication quantity, and publishing flashy results in fancy journals are incentivized, and people who can do that are rewarded … they’ll be successful, and pass on their successful methods to others.”

Many scientists have had enough. They want to break this cycle of perverse incentives and rewards. They are going through a period of introspection, hopeful that the end result will yield stronger scientific institutions. In our survey and interviews, they offered a wide variety of ideas for improving the scientific process and bringing it closer to its ideal form.

Before we jump in, some caveats to keep in mind: Our survey was not a scientific poll. For one, the respondents disproportionately hailed from the biomedical and social sciences and English-speaking communities.

Many of the responses did, however, vividly illustrate the challenges and perverse incentives that scientists across fields face. And they are a valuable starting point for a deeper look at dysfunction in science today.

The place to begin is right where the perverse incentives first start to creep in: the money.

(1)

Academia has a huge money problem

To do most any kind of research, scientists need money: to run studies, to subsidize lab equipment, to pay their assistants and even their own salaries. Our respondents told us that getting — and sustaining — that funding is a perennial obstacle.

Their gripe isn’t just with the quantity, which, in many fields, is shrinking. It’s the way money is handed out that puts pressure on labs to publish a lot of papers, breeds conflicts of interest, and encourages scientists to overhype their work.

In the United States, academic researchers in the sciences generally cannot rely on university funding alone to pay for their salaries, assistants, and lab costs. Instead, they have to seek outside grants. “In many cases the expectations were and often still are that faculty should cover at least 75 percent of the salary on grants,” writes John Chatham, a professor of medicine studying cardiovascular disease at University of Alabama at Birmingham.

Grants also usually expire after three or so years, which pushes scientists away from long-term projects. Yet as John Pooley, a neurobiology postdoc at the University of Bristol, points out, the biggest discoveries usually take decades to uncover and are unlikely to occur under short-term funding schemes.

Outside grants are also in increasingly short supply. In the US, the largest source of funding is the federal government, and that pool of money has been plateauing for years, while young scientists enter the workforce at a faster rate than older scientists retire.

Take the National Institutes of Health, a major funding source. Its budget rose at a fast clip through the 1990s, stalled in the 2000s, and then dipped with sequestration budget cuts in 2013. All the while, rising costs for conducting science meant that each NIH dollar purchased less and less. Last year, Congress approved the biggest NIH spending hike in a decade. But it won’t erase the shortfall.

The consequences are striking: In 2000, more than 30 percent of NIH grant applications got approved. Today, it’s closer to 17 percent. “It’s because of what’s happened in the last 12 years that young scientists in particular are feeling such a squeeze,” NIH Director Francis Collins said at the Milken Global Conference in May.

Some of our respondents said that this vicious competition for funds can influence their work. Funding “affects what we study, what we publish, the risks we (frequently don’t) take,” explains Gary Bennett, a neuroscientist at Duke University. It “nudges us to emphasize safe, predictable (read: fundable) science.”

Truly novel research takes longer to produce, and it doesn’t always pay off. A National Bureau of Economic Research working paper found that, on the whole, truly unconventional papers tend to be less consistently cited in the literature. So scientists and funders increasingly shy away from them, preferring short-turnaround, safer papers. But everyone suffers from that: the NBER report found that novel papers also occasionally lead to big hits that inspire high-impact, follow-up studies.

“I think because you have to publish to keep your job and keep funding agencies happy, there are a lot of (mediocre) scientific papers out there … with not much new science presented,” writes Kaitlyn Suski, a chemistry and atmospheric science postdoc at Colorado State University.

Another worry: When independent, government, or university funding sources dry up, scientists may feel compelled to turn to industry or interest groups eager to generate studies to support their agendas.

Finally, all of this grant writing is a huge time suck, taking resources away from the actual scientific work. Tyler Josephson, an engineering graduate student at the University of Delaware, writes that many professors he knows spend 50 percent of their time writing grant proposals. “Imagine,” he asks, “what they could do with more time to devote to teaching and research?”

It’s easy to see how these problems in funding kick off a vicious cycle. To be more competitive for grants, scientists have to have published work. To have published work, they need positive (i.e., statistically significant) results. That puts pressure on scientists to pick “safe” topics that will yield a publishable conclusion — or, worse, may bias their research toward significant results.

“When funding and pay structures are stacked against academic scientists,” writes Alison Bernstein, a neuroscience postdoc at Emory University, “these problems are all exacerbated.”

Fixes for science’s funding woes

Right now there are arguably too many researchers chasing too few grants. Or, as a 2014 piece in the Proceedings of the National Academy of Sciences put it: “The current system is in perpetual disequilibrium, because it will inevitably generate an ever-increasing supply of scientists vying for a finite set of research resources and employment opportunities.”

“As it stands, too much of the research funding is going to too few of the researchers,” writes Gordon Pennycook, a PhD candidate in cognitive psychology at the University of Waterloo. “This creates a culture that rewards fast, sexy (and probably wrong) results.”

One straightforward way to ameliorate these problems would be for governments to simply increase the amount of money available for science. (Or, more controversially, decrease the number of PhDs, but we’ll get to that later.) If Congress boosted funding for the NIH and National Science Foundation, that would take some of the competitive pressure off researchers.

But that only goes so far. Funding will always be finite, and researchers will never get blank checks to fund the risky science projects of their dreams. So other reforms will also prove necessary.

One suggestion: Bring more stability and predictability into the funding process. “The NIH and NSF budgets are subject to changing congressional whims that make it impossible for agencies (and researchers) to make long term plans and commitments,” M. Paul Murphy, a neurobiology professor at the University of Kentucky, writes. “The obvious solution is to simply make [scientific funding] a stable program, with an annual rate of increase tied in some manner to inflation.”

Another idea would be to change how grants are awarded: Foundations and agencies could fund specific people and labs for a period of time rather than individual project proposals. (The Howard Hughes Medical Institute already does this.) A system like this would give scientists greater freedom to take risks with their work.

Alternatively, researchers in the journal mBio recently called for a lottery-style system. Proposals would be measured on their merits, but then a computer would randomly choose which get funded.

“Although we recognize that some scientists will cringe at the thought of allocating funds by lottery,” the authors of the mBio piece write, “the available evidence suggests that the system is already in essence a lottery without the benefits of being random.” Pure randomness would at least reduce some of the perverse incentives at play in jockeying for money.
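To make the idea concrete, here is a minimal sketch, in Python, of how such a two-stage lottery might work: proposals are first screened against a merit threshold, and grants are then drawn at random from the eligible pool until the budget runs out. The proposal fields, scores, and budget figures below are hypothetical, not details from the mBio proposal.

```python
import random

def lottery_fund(proposals, merit_threshold, budget):
    """Illustrative two-stage allocation: screen proposals on merit,
    then fund randomly among those that pass until the budget is exhausted."""
    eligible = [p for p in proposals if p["merit_score"] >= merit_threshold]
    random.shuffle(eligible)  # pure chance replaces fine-grained ranking
    funded, spent = [], 0
    for p in eligible:
        if spent + p["cost"] <= budget:
            funded.append(p)
            spent += p["cost"]
    return funded

# Hypothetical proposals; merit_score and cost are made-up fields.
proposals = [
    {"id": "A", "merit_score": 8.1, "cost": 250_000},
    {"id": "B", "merit_score": 7.4, "cost": 400_000},
    {"id": "C", "merit_score": 9.0, "cost": 300_000},
    {"id": "D", "merit_score": 6.2, "cost": 150_000},
]
print([p["id"] for p in lottery_fund(proposals, merit_threshold=7.0, budget=600_000)])
```

The design choice is the point: above the threshold, no amount of grantsmanship improves a proposal’s odds, which is exactly the incentive the lottery’s advocates want to remove.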

There are also some ideas out there to minimize conflicts of interest from industry funding. Recently, in PLOS Medicine, Stanford epidemiologist John Ioannidis suggested that pharmaceutical companies ought to pool the money they use to fund drug research, to be allocated to scientists who then have no exchange with industry during study design and execution. This way, scientists could still get funding for work crucial for drug approvals — but without the pressures that can skew results.

These solutions are by no means complete, and they may not make sense for every scientific discipline. The daily incentives facing biomedical scientists to bring new drugs to market are different from the incentives facing geologists trying to map out new rock layers. But based on our survey, funding appears to be at the root of many of the problems facing scientists, and it’s one that deserves more careful discussion.

(2)

Too many studies are poorly designed. Blame bad incentives.

Scientists are ultimately judged by the research they publish. And the pressure to publish pushes scientists to come up with splashy results, of the sort that get them into prestigious journals. “Exciting, novel results are more publishable than other kinds,” says Brian Nosek, who co-founded the Center for Open Science at the University of Virginia.

The problem here is that truly groundbreaking findings simply don’t occur very often, which means scientists face pressure to game their studies so they turn out to be a little more “revolutionary.” (Caveat: Many of the respondents who focused on this particular issue hailed from the biomedical and social sciences.)

Some of this bias can creep into decisions that are made early on: choosing whether to randomize participants, whether to include a control group for comparison, or which confounding factors to control for and which to ignore. (Read more on study design particulars here.)

Many of our survey respondents noted that perverse incentives can also push scientists to cut corners in how they analyze their data.

“I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,” writes Jess Kautz, a PhD student at the University of Arizona. “And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work.”

Increasingly, meta-researchers (who conduct research on research) are realizing that scientists often do find little ways to hype up their own results — and they’re not always doing it consciously. Among the most famous examples is a technique called “p-hacking,” in which researchers test their data against many hypotheses and only report those that have statistically significant results.
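A small simulation shows why this matters. The sketch below is an illustration of the general idea, not an analysis from any study mentioned here: it generates data with no real effect at all, tests many outcomes per “study,” and counts the study as a success if any single test clears p < 0.05. The group sizes and number of outcomes are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def phack_one_study(n_per_group=20, n_outcomes=10, alpha=0.05):
    """Simulate a study with no true effect: two identical groups measured on
    many outcomes. Report 'success' if ANY outcome reaches p < alpha."""
    for _ in range(n_outcomes):
        a = rng.normal(size=n_per_group)  # control group, pure noise
        b = rng.normal(size=n_per_group)  # treatment group, pure noise
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True  # only the 'significant' outcome gets written up
    return False

false_positive_rate = np.mean([phack_one_study() for _ in range(2000)])
print(f"Share of no-effect studies reporting a 'significant' result: {false_positive_rate:.0%}")
# With 10 outcomes tested per study, roughly 1 - 0.95**10, or about 40%, come out 'significant'.
```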

In a recent study, which tracked the misuse of p-values in biomedical journals, meta-researchers found “an epidemic” of statistical significance: 96 percent of the papers that included a p-value in their abstracts boasted statistically significant results.

That seems awfully suspicious. It suggests the biomedical community has been chasing statistical significance, potentially giving dubious results the appearance of validity through techniques like p-hacking — or simply suppressing important results that don’t look significant enough. Fewer studies share effect sizes (which arguably gives a better indication of how meaningful a result might be) or discuss measures of uncertainty.
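For readers unfamiliar with the distinction, here is a minimal sketch of what reporting an effect size and a measure of uncertainty looks like in practice: a standardized mean difference (Cohen’s d) and a normal-approximation confidence interval, computed on made-up data. The numbers are hypothetical and exist only to illustrate the reporting habit.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Standardized mean difference: how large the effect is,
    independent of sample size (unlike a p-value)."""
    pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return (np.mean(a) - np.mean(b)) / pooled_sd

def mean_diff_ci(a, b, confidence=0.95):
    """Normal-approximation confidence interval for the difference in means,
    a simple measure of uncertainty to report alongside the point estimate."""
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
    z = stats.norm.ppf(0.5 + confidence / 2)
    return diff - z * se, diff + z * se

rng = np.random.default_rng(1)
treatment = rng.normal(0.4, 1.0, size=50)  # hypothetical treatment scores
control = rng.normal(0.0, 1.0, size=50)    # hypothetical control scores
lo, hi = mean_diff_ci(treatment, control)
print(f"Cohen's d = {cohens_d(treatment, control):.2f}, 95% CI for the difference: [{lo:.2f}, {hi:.2f}]")
```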

“The current system has done too much to reward results,” says Joseph Hilgard, a postdoctoral research fellow at the Annenberg Public Policy Center. “This causes a conflict of interest: The scientist is in charge of evaluating the hypothesis, but the scientist also desperately wants the hypothesis to be true.”

The consequences are staggering. An estimated $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies, according to meta-researchers who have analyzed inefficiencies in research. We know that as much as 30 percent of the most influential original medical research papers later turn out to be wrong or exaggerated.

Fixes for poor study design

Our respondents suggested that the two key ways to encourage stronger study design — and discourage positive results chasing — would involve rethinking the rewards system and building more transparency into the research process.

“I would make rewards based on the rigor of the research methods, rather than the outcome of the research,” writes Simine Vazire, a journal editor and a social psychology professor at UC Davis. “Grants, publications, jobs, awards, and even media coverage should be based more on how good the study design and methods were, rather than whether the result was significant or surprising.”

Likewise, Cambridge mathematician Tim Gowers argues that researchers should get recognition for advancing science broadly through informal idea sharing — rather than only getting credit for what they publish.

“We’ve gotten used to working away in private and then producing a sort of polished document in the form of a journal article,” Gowers said. “This tends to hide a lot of the thought process that went into making the discoveries. I’d like attitudes to change so people focus less on the race to be first to prove a particular theorem, or in science to make a particular discovery, and more on other ways of contributing to the furthering of the subject.”

When it comes to published results, meanwhile, many of our respondents wanted to see more journals put a greater emphasis on rigorous methods and processes rather than splashy results.

“I think the one thing that would have the biggest impact is removing publication bias: judging papers by the quality of questions, quality of method, and soundness of analyses, but not on the results themselves,” writes Michael Inzlicht, a University of Toronto psychology and neuroscience professor.

Some journals are already embracing this sort of research. PLOS One, for example, makes a point of accepting negative studies (in which a scientist conducts a careful experiment and finds nothing) for publication, as does the aptly named Journal of Negative Results in Biomedicine.

More transparency would also help, writes Daniel Simons, a professor of psychology at the University of Illinois. Here’s one example: ClinicalTrials.gov, a site run by the NIH, allows researchers to register their study design and methods ahead of time and then publicly record their progress. That makes it more difficult for scientists to hide experiments that didn’t produce the results they wanted. (The site now holds information for more than 180,000 studies in 180 countries.)

Similarly, the AllTrials campaign is pushing for every clinical trial (past, present, and future) around the world to be registered, with the full methods and results reported. Some drug companies and universities have created portals that allow researchers to access raw data from their trials.

The key is for this sort of transparency to become the norm rather than a laudable outlier.


(3)

Replicating results is crucial. But scientists rarely do it.

Replication is another foundational concept in science. Researchers take an older study that they want to test and then try to reproduce it to see if the findings hold up.

Testing, validating, retesting — it’s all part of a slow and grinding process to arrive at some semblance of scientific truth. But this doesn’t happen as often as it should, our respondents said. Scientists face few incentives to engage in the slog of replication. And even when they attempt to replicate a study, they often find they can’t do so. Increasingly it’s being called a “crisis of irreproducibility.”

The stats bear this out: A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all.

More recently, a landmark study published in the journal Science demonstrated that only a fraction of recent findings in top psychology journals could be replicated. This is happening in other fields too, says Ivan Oransky, one of the founders of the blog Retraction Watch, which tracks scientific retractions.

As for the underlying causes, our survey respondents pointed to a couple of problems. First, scientists have very few incentives to even try replication. Jon-Patrick Allem, a social scientist at the Keck School of Medicine of USC, noted that funding agencies prefer to support projects that find new information instead of confirming old results.

Journals are also reluctant to publish replication studies unless “they contradict earlier findings or conclusions,” Allem writes. The result is to discourage scientists from checking each other’s work. “Novel information trumps stronger evidence, which sets the parameters for working scientists.”

The second problem is that many studies can be difficult to replicate. Sometimes their methods are too opaque. Sometimes the original studies had too few participants to produce a replicable answer. And sometimes, as we saw in the previous section, the study is simply poorly designed or outright wrong.

Again, this goes back to incentives: When researchers have to publish frequently and chase positive results, there’s less time to conduct high-quality studies with well-articulated methods.

Fixes for underreplication

Scientists need more carrots to entice them to pursue replication in the first place. As it stands, researchers are encouraged to publish new and positive results and to allow negative results to linger in their laptops or file drawers.

This has plagued science with a problem called “publication bias” — not all studies that are conducted actually get published in journals, and the ones that do tend to have positive and dramatic conclusions.

If institutions started to reward tenure positions or make hires based on the quality of a researcher’s body of work, instead of quantity, this might encourage more replication and discourage positive results chasing.

“The key that needs to change is performance review,” writes Christopher Wynder, a former assistant professor at McMaster University. “It affects reproducibility because there is little value in confirming another lab’s results and trying to publish the findings.”

The next step would be to make replication of studies easier. This could include more robust sharing of methods in published research papers. “It would be great to have stronger norms about being more detailed with the methods,” says University of Virginia’s Brian Nosek.

He also suggested more regularly adding supplements at the end of papers that get into the procedural nitty-gritty, to help anyone wanting to repeat an experiment. “If I can rapidly get up to speed, I have a much better chance of approximating the results,” he said.

Nosek has detailed other potential fixes that might help with replication — all part of his work at the Center for Open Science.

A greater degree of transparency and data sharing would enable replications, said Stanford’s John Ioannidis. Too often, anyone trying to replicate a study must chase down the original investigators for details about how the experiment was conducted.

“It is better to do this in an organized fashion with buy-in from all leading investigators in a scientific discipline,” he explained, “rather than have to try to find the investigator in each case and ask him or her in detective-work fashion about details, data, and methods that are otherwise unavailable.”

Researchers could also make use of new tools, such as open source software that tracks every version of a data set, so that they can share their data more easily and have transparency built into their workflow.
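The respondents didn’t name a specific tool, but the core idea is simple enough to sketch: record a cryptographic fingerprint of each data file whenever it changes, so any analysis can state exactly which version of the data it used. The file names and log format below are hypothetical, a minimal stand-in for what dedicated data-versioning software does more thoroughly.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_data_version(data_path, log_path="data_versions.json"):
    """Append a timestamped SHA-256 fingerprint of a data file to a JSON log,
    so later analyses can cite the exact version of the data they used."""
    digest = hashlib.sha256(Path(data_path).read_bytes()).hexdigest()
    log_file = Path(log_path)
    log = json.loads(log_file.read_text()) if log_file.exists() else []
    log.append({
        "file": str(data_path),
        "sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })
    log_file.write_text(json.dumps(log, indent=2))
    return digest

# Hypothetical usage: print(record_data_version("trial_results.csv"))
```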

Some of our respondents suggested that scientists engage in replication prior to publication. “Before you put an exploratory idea out in the literature and have people take the time to read it, you owe it to the field to try to replicate your own findings,” says John Sakaluk, a social psychologist at the University of Victoria.

For example, he has argued, psychologists could conduct small experiments with a handful of participants to form ideas and generate hypotheses. But they would then need to conduct bigger experiments, with more participants, to replicate and confirm those hypotheses before releasing them into the world. “In doing so,” Sakaluk says, “the rest of us can have more confidence that this is something we might want to [incorporate] into our own research.”
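One practical question Sakaluk’s suggestion raises is how large the confirmatory study needs to be. The sketch below uses a common textbook shortcut (a normal-approximation power calculation for a two-sample comparison), not anything prescribed by the respondents, to show why small pilots alone aren’t enough: confirming even a medium-size effect with 80 percent power takes on the order of 60-plus participants per group.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample comparison,
    using n ≈ 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# Under this approximation, a medium effect (d ≈ 0.5) needs about 63 participants
# per group, and a small effect (d ≈ 0.2) needs about 393 per group.
print(n_per_group(0.5), n_per_group(0.2))
```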

(4)

Peer review is broken

Peer review is meant to weed out junk science before it reaches publication. Yet over and over again in our survey, respondents told us this process fails. It was the part of the scientific machinery that elicited the most rage among the researchers we heard from.

Normally, peer review works like this: A researcher submits an article for publication in a journal. If the journal accepts the article for review, it’s sent off to peers in the same field for constructive criticism and eventual publication — or rejection. (The level of anonymity varies; some journals have double-blind reviews, while others have moved to triple-blind review, where the authors, editors, and reviewers don’t know who one another are.)

It sounds like a reasonable system. But numerous studies and systematic reviews have shown that peer review doesn’t reliably prevent poor-quality science from being published.

The process frequently fails to detect fraud or other problems with manuscripts, which isn’t all that surprising when you consider researchers aren’t paid or otherwise rewarded for the time they spend reviewing manuscripts. They do it out of a sense of duty — to contribute to their area of research and help advance science.

But this means it’s not always easy to find the best people to peer-review manuscripts in their field, that harried researchers delay doing the work (leading to publication delays of up to two years), and that when they finally do sit down to peer-review an article they might be rushed and miss errors in studies.

“The issue is that most referees simply don’t review papers carefully enough, which results in the publishing of incorrect papers, papers with gaps, and simply unreadable papers,” says Joel Fish, an assistant professor of mathematics at the University of Massachusetts Boston. “This ends up being a large problem for younger researchers to enter the field, since that means they have to ask around to figure out which papers are solid and which are not.”

That’s not to mention the problem of peer review bullying. Since the default in the process is that editors and peer reviewers know who the authors are (but authors don’t know who the reviewers are), biases against researchers or institutions can creep in, opening the opportunity for rude, rushed, and otherwise unhelpful comments. (Just check out the popular #SixWordPeerReview hashtag on Twitter.)

These issues were not lost on our survey respondents, who said peer review amounts to a broken system, which punishes scientists and diminishes the quality of publications. They want to not only overhaul the peer review process but also change how it’s conceptualized.

Fixes for peer review

On the question of editorial bias and transparency, our respondents were surprisingly divided. Several suggested that all journals should move toward double-blinded peer review, whereby reviewers can’t see the names or affiliations of the person they’re reviewing and publication authors don’t know who reviewed them. The main goal here was to reduce bias.

“We know that scientists make biased decisions based on unconscious stereotyping,” writes Pacific Northwest National Laboratory postdoc Timothy Duignan. “So rather than judging a paper by the gender, ethnicity, country, or institutional status of an author — which I believe happens a lot at the moment — it should be judged by its quality independent of those things.”

Yet others thought that more transparency, rather than less, was the answer: “While we correctly advocate for the highest level of transparency in publishing, we still have most reviews that are blinded, and I cannot know who is reviewing me,” writes Lamberto Manzoli, a professor of epidemiology and public health at the University of Chieti, in Italy. “Too many times we see very low quality reviews, and we cannot understand whether it is a problem of scarce knowledge or conflict of interest.”

Perhaps there is a middle ground. For example, eLife, a new open access journal that is rapidly rising in impact factor, runs a collaborative peer review process. Editors and peer reviewers work together on each submission to create a consolidated list of comments about a paper. The author can then reply to what the group saw as the most important issues, rather than facing the biases and whims of individual reviewers. (Oddly, this process is faster — eLife takes less time to accept papers than Nature or Cell.)

Still, those are mostly incremental fixes. Other respondents argued that we might need to radically rethink the entire process of peer review from the ground up.

“The current peer review process embraces a concept that a paper is final,” says Nosek. “The review process is [a form of] certification, and that a paper is done.” But science doesn’t work that way. Science is an evolving process, and truth is provisional. So, Nosek said, science must “move away from the embrace of definitiveness of publication.”

Some respondents wanted to think of peer review as more of a continuous process, in which studies are repeatedly and transparently updated and republished as new feedback changes them — much like Wikipedia entries. This would require some sort of expert crowdsourcing.

“The scientific publishing field — particularly in the biological sciences — acts like there is no internet,” says Lakshmi Jayashankar, a senior scientific reviewer with the federal government. “The paper peer review takes forever, and this hurts the scientists who are trying to put their results quickly into the public domain.”

One possible model already exists in mathematics and physics, where there is a long tradition of “pre-printing” articles. Studies are posted on an open website called arXiv.org, often before being peer-reviewed and published in journals. There, the articles are sorted and commented on by a community of moderators, providing another chance to filter problems before they make it to peer review.

“Posting preprints would allow scientific crowdsourcing to increase the number of errors that are caught, since traditional peer-reviewers cannot be expected to be experts in every sub-discipline,” writes Scott Hartman, a paleobiology PhD student at the University of Wisconsin.

And even after an article is published, researchers think the peer review process shouldn’t stop. They want to see more “post-publication” peer review on the web, so that academics can critique and comment on articles after they’ve been published. Sites like PubPeer and F1000Research have already popped up to facilitate that kind of post-publication feedback.

“We do this a couple of times a year at conferences,” writes Becky Clarkson, a geriatric medicine researcher at the University of Pittsburgh. “We could do this every day on the internet.”

The bottom line is that traditional peer review has never worked as well as we imagine it to — and it’s ripe for serious disruption.

(5)

Too much science is locked behind paywalls

After a study has been funded, conducted, and peer-reviewed, there’s still the question of getting it out so that others can read and understand its results.

Over and over, our respondents expressed dissatisfaction with how scientific research gets disseminated. Too much is locked away in paywalled journals, difficult and costly to access, they said. Some respondents also criticized the publication process itself for being too slow, bogging down the pace of research.

On the access question, a number of scientists argued that academic research should be free for all to read. They chafed against the current model, in which for-profit publishers put journals behind pricey paywalls.

A single article in Science will set you back $30; a year-long subscription to Cell will cost $279. Elsevier publishes 2,000 journals that can cost up to $10,000 or $20,000 a year for a subscription.

Many US institutions pay those journal fees for their employees, but not all scientists (or other curious readers) are so lucky. In a recent issue of Science, journalist John Bohannon described the plight of a PhD candidate at a top university in Iran. He calculated that the student would have to spend $1,000 a week just to read the papers he needed.

As Michael Eisen, a biologist at UC Berkeley and co-founder of the Public Library of Science (or PLOS), put it, scientific journals are trying to hold on to the profits of the print era in the age of the internet. Subscription prices have continued to climb, as a handful of big publishers (like Elsevier) have bought up more and more journals, creating mini knowledge fiefdoms.

“Large, publicly owned publishing companies make huge profits off of scientists by publishing our science and then selling it back to the university libraries at a massive profit (which primarily benefits stockholders),” Corina Logan, an animal behavior researcher at the University of Cambridge, noted. “It is not in the best interest of the society, the scientists, the public, or the research.” (In 2014, Elsevier reported a profit margin of nearly 40 percent and revenues close to $3 billion.)

“It seems wrong to me that taxpayers pay for research at government labs and universities but do not usually have access to the results of these studies, since they are behind paywalls of peer-reviewed journals,” added Melinda Simon, a postdoc microfluidics researcher at Lawrence Livermore National Lab.

Fixes for closed science

Many of our respondents urged their peers to publish in open access journals (along the lines of PeerJ or PLOS Biology). But there’s an inherent tension here. Career advancement can often depend on publishing in the most prestigious journals, like Science or Nature, which still have paywalls.

There’s also the question of how best to finance a wholesale transition to open access. After all, journals can never be entirely free. Someone has to pay for the editorial staff, maintaining the website, and so on. Right now, open access journals typically charge fees to those submitting papers, putting the burden on scientists who are already struggling for funding.

One radical step would be to abolish for-profit publishers altogether and move toward a nonprofit model. “For journals I could imagine that scientific associations run those themselves,” suggested Johannes Breuer, a postdoctoral researcher in media psychology at the University of Cologne. “If they go for online only, the costs for web hosting, copy-editing, and advertising (if needed) can be easily paid out of membership fees.”

As a model, Cambridge’s Tim Gowers has launched an online mathematics journal called Discrete Analysis. The nonprofit venture is owned and published by a team of scholars, it has no publisher middlemen, and access will be completely free for all.

Until wholesale reform happens, however, many scientists are going a much simpler route: illegally pirating papers.

Bohannon reported that millions of researchers around the world now use Sci-Hub, a site set up by Alexandra Elbakyan, a Russia-based neuroscientist, that illegally hosts more than 50 million academic papers. “As a devout pirate,” Elbakyan told us, “I think that copyright should be abolished.”

One respondent had an even more radical suggestion: that we abolish the existing peer-reviewed journal system altogether and simply publish everything online as soon as it’s done.

“Research should be made available online immediately, and be judged by peers online rather than having to go through the whole formatting, submitting, reviewing, rewriting, reformatting, resubmitting, etc etc etc that can take years,” writes Bruno Dagnino, formerly of the Netherlands Institute for Neuroscience. “One format, one platform. Judge by the whole community, with no delays.”

A few scientists have been taking steps in this direction. Rachel Harding, a genetic researcher at the University of Toronto, has set up a website called Lab Scribbles, where she publishes her lab notes on the structure of huntingtin proteins in real time, posting data as well as summaries of her breakthroughs and failures. The idea is to help share information with other researchers working on similar issues, so that labs can avoid needless overlap and learn from each other’s mistakes.

Not everyone might agree with approaches this radical; critics worry that too much sharing might encourage scientific free riding. Still, the common theme in our survey was transparency. Science is currently too opaque, research too difficult to share. That needs to change.


(6)

Science is poorly communicated to the public

“If I could change one thing about science, I would change the way it is communicated to the public by scientists, by journalists, and by celebrities,” writes Clare Malone, a postdoctoral researcher in a cancer genetics lab at Brigham and Women’s Hospital.

She wasn’t alone. Quite a few respondents in our survey expressed frustration at how science gets relayed to the public. They were distressed by the fact that so many laypeople hold on to completely unscientific ideas or have a crude view of how science works.

They griped that misinformed celebrities like Gwyneth Paltrow have an outsize influence over public perceptions about health and nutrition. (As the University of Alberta’s Timothy Caulfield once told us, “It’s incredible how much she is wrong about.”)

They have a point. Science journalism is often full of exaggerated, conflicting, or outright misleading claims. If you ever want to see a perfect example of this, check out “Kill or Cure,” a site where Paul Battley meticulously documents all the times the Daily Mail reported that various items — from antacids to yogurt — either cause cancer, prevent cancer, or sometimes do both.

Sometimes bad stories are peddled by university press shops. In 2015, the University of Maryland issued a press release claiming that a single brand of chocolate milk could improve concussion recovery. It was an absurd case of science hype.

Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.

But not everyone blamed the media and publicists alone. Other respondents pointed out that scientists themselves often oversell their work, even if it’s preliminary, because funding is competitive and everyone wants to portray their work as big and important and game-changing.

“You have this toxic dynamic where journalists and scientists enable each other in a way that massively inflates the certainty and generality of how scientific findings are communicated and the promises that are made to the public,” writes Daniel Molden, an associate professor of psychology at Northwestern University. “When these findings prove to be less certain and the promises are not realized, this just further erodes the respect that scientists get and further fuels scientists’ desire for appreciation.”

Fixes for better science communication

Opinions differed on how to improve this sorry state of affairs — some pointed to the media, some to press offices, others to scientists themselves.

Plenty of our respondents wished that more science journalists would move away from hyping single studies. Instead, they said, reporters ought to put new research findings in context, and pay more attention to the rigor of a study’s methodology than to the splashiness of the end results.

“On a given subject, there are often dozens of studies that examine the issue,” writes Brian Stacy of the US Department of Agriculture. “It is very rare for a single study to conclusively resolve an important research question, but many times the results of a study are reported as if they do.”

But it’s not just reporters who will need to shape up. The “toxic dynamic” of journalists, academic press offices, and scientists enabling one another to hype research can be tough to change, and many of our respondents pointed out that there were no easy fixes — though recognition was an important first step.

Some suggested the creation of credible referees that could rigorously distill the strengths and weaknesses of research. (Some variations of this are starting to pop up: The Genetic Expert News Service solicits outside experts to weigh in on big new studies in genetics and biotechnology.) Other respondents suggested that making research free to all might help tamp down media misrepresentations.

Still other respondents noted that scientists themselves should spend more time learning how to communicate with the public — a skill that tends to be under-rewarded in the current system.

“Being able to explain your work to a non-scientific audience is just as important as publishing in a peer-reviewed journal, in my opinion, but currently the incentive structure has no place for engaging the public,” writes Crystal Steltenpohl, a graduate assistant at DePaul University.

Reducing the perverse incentives around scientific research itself could also help reduce overhype. “If we reward research based on how noteworthy the results are, this will create pressure to exaggerate the results (through exploiting flexibility in data analysis, misrepresenting results, or outright fraud),” writes UC Davis’s Simine Vazire. “We should reward research based on how rigorous the methods and design are.”

Or perhaps we should focus on improving science literacy. Jeremy Johnson, a project coordinator at the Broad Institute, argued that bolstering science education could help ameliorate a lot of these problems. “Science literacy should be a top priority for our educational policy,” he said, “not an elective.”


(7)

Life as a young academic is incredibly stressful

When we asked researchers what they’d fix about science, many talked about the scientific process itself, about study design or peer review. These responses often came from tenured scientists who loved their jobs but wanted to make the broader scientific project even better.

But on the flip side, we heard from a number of researchers — many of them graduate students or postdocs — who were genuinely passionate about research but found the day-to-day experience of being a scientist grueling and unrewarding. Their comments deserve a section of their own.

Today, many tenured scientists and research labs depend on small armies of graduate students and postdoctoral researchers to perform their experiments and conduct data analysis.

These grad students and postdocs are often the primary authors on many studies. In a number of fields, such as the biomedical sciences, a postdoc position is a prerequisite before a researcher can get a faculty-level position at a university.

This entire system sits at the heart of modern-day science. (A new card game called Lab Wars pokes fun at these dynamics.)

But these low-level research jobs can be a grind. Postdocs typically work long hours and are relatively low-paid for their level of education — salaries are frequently pegged to stipends set by NIH National Research Service Award grants, which start at $43,692 and rise to $47,268 in year three.

Postdocs tend to be hired on for one to three years at a time, and in many institutions they are considered contractors, limiting their workplace protections. We heard repeatedly about extremely long hours and limited family leave benefits.

“Oftentimes this is problematic for individuals in their late 20s and early to mid-30s who have PhDs and who may be starting families while also balancing a demanding job that pays poorly,” wrote one postdoc, who asked for anonymity.

This lack of flexibility tends to disproportionately affect women — especially women planning to have families — which helps contribute to gender inequalities in research. (A 2012 paper found that female job applicants in academia are judged more harshly and are offered less money than males.) “There is very little support for female scientists and early-career scientists,” noted another postdoc.

“There is very little long-term financial security in today’s climate, very little assurance where the next paycheck will come from,” wrote William Kenkel, a postdoctoral researcher in neuroendocrinology at Indiana University. “Since receiving my PhD in 2012, I left Chicago and moved to Boston for a post-doc, then in 2015 I left Boston for a second post-doc in Indiana. In a year or two, I will move again for a faculty job, and that’s if I’m lucky. Imagine trying to build a life like that.”

This strain can also adversely affect the research that young scientists do. “Contracts are too short term,” noted another researcher. “It discourages rigorous research as it is difficult to obtain enough results for a paper (and hence progress) in two to three years. The constant stress drives otherwise talented and intelligent people out of science also.”

Because universities produce so many PhDs but have way fewer faculty jobs available, many of these postdoc researchers have limited career prospects. Some of them end up staying stuck in postdoc positions for five or 10 years or more.

“In the biomedical sciences,” wrote the first postdoc quoted above, “each available faculty position receives applications from hundreds or thousands of applicants, putting immense pressure on postdocs to publish frequently and in high impact journals to be competitive enough to attain those positions.”

Many young researchers pointed out that PhD programs do fairly little to train people for careers outside of academia. “Too many [PhD] students are graduating for a limited number of professor positions with minimal training for careers outside of academic research,” noted Don Gibson, a PhD candidate studying plant genetics at UC Davis.

Laura Weingartner, a graduate researcher in evolutionary ecology at Indiana University, agreed: “Few universities (specifically the faculty advisors) know how to train students for anything other than academia, which leaves many students hopeless when, inevitably, there are no jobs in academia for them.”

Add it up and it’s not surprising that we heard plenty of comments about anxiety and depression among both graduate students and postdocs. “There is a high level of depression among PhD students,” writes Gibson. “Long hours, limited career prospects, and low wages contribute to this emotion.”

A 2015 study at the University of California Berkeley found that 47 percent of PhD students surveyed could be considered depressed. The reasons for this are complex and can’t be solved overnight. Pursuing academic research is already an arduous, anxiety-ridden task that’s bound to take a toll on mental health.

But as Jennifer Walker explored recently at Quartz, many PhD students also feel isolated and unsupported, exacerbating those issues.

Fixes to keep young scientists in science

We heard plenty of concrete suggestions. Graduate schools could offer more generous family leave policies and child care for graduate students. They could also increase the number of female applicants they accept in order to balance out the gender disparity.

But some respondents also noted that workplace issues for grad students and postdocs were inseparable from some of the fundamental issues facing science that we discussed earlier. The fact that university faculty and research labs face immense pressure to publish — but have limited funding — makes it highly attractive to rely on low-paid postdocs.

“There is little incentive for universities to create jobs for their graduates or to cap the number of PhDs that are produced,” writes Weingartner. “Young researchers are highly trained but relatively inexpensive sources of labor for faculty.”

Some respondents also pointed to the mismatch between the number of PhDs produced each year and the number of academic jobs available.

A recent feature by Julie Gould in Nature explored a number of ideas for revamping the PhD system. One idea is to split the PhD into two programs: one for vocational careers and one for academic careers. The former would better train and equip graduates to find jobs outside academia.

This is hardly an exhaustive list. The core point underlying all these suggestions, however, was that universities and research labs need to do a better job of supporting the next generation of researchers. Indeed, that’s arguably just as important as addressing problems with the scientific process itself. Young scientists, after all, are by definition the future of science.

Weingartner concluded with a sentiment we saw all too frequently: “Many creative, hard-working, and/or underrepresented scientists are edged out of science because of these issues. Not every student or university will have all of these unfortunate experiences, but they’re pretty common. There are a lot of young, disillusioned scientists out there now who are expecting to leave research.”

Science needs to correct its greatest weaknesses

Science is not doomed.

For better or worse, it still works. Look no further than the novel vaccines to prevent Ebola, the discovery of gravitational waves, or new treatments for stubborn diseases. And it’s getting better in many ways. See the work of meta-researchers who study and evaluate research — a field that has gained prominence over the past 20 years.

But science is conducted by fallible humans, and it hasn’t been human-proofed to protect against all our foibles. The scientific revolution began just 500 years ago. Only over the past 100 has science become professionalized. There is still room to figure out how best to remove biases and align incentives.

To that end, here are some broad suggestions:

One: Science has to acknowledge and address its money problem. Science is enormously valuable and deserves ample funding. But the way incentives are set up can distort research.

Right now, small studies with bold results that can be quickly turned around and published in journals are disproportionately rewarded. By contrast, there are fewer incentives to conduct research that tackles important questions with robustly designed studies over long periods of time. Solving this won’t be easy, but it is at the root of many of the issues discussed above.

Two: Science needs to celebrate and reward failure. Accepting that we can learn more from dead ends in research and studies that failed would alleviate the “publish or perish” cycle. It would make scientists more confident in designing robust tests and not just convenient ones, in sharing their data and explaining their failed tests to peers, and in using those null results to form the basis of a career (instead of chasing those all-too-rare breakthroughs).

Three: Science has to be more transparent. Scientists need to publish their methods and findings more fully, and share their raw data in ways that are easily accessible and digestible for those who may want to reanalyze or replicate their findings.

There will always be waste and mediocre research, but as Stanford’s Ioannidis explains in a recent paper, a lack of transparency creates excess waste and diminishes the usefulness of too much research.

Again and again, we also heard from researchers, particularly in the social sciences, who felt that cognitive biases in their own work, shaped by pressures to publish and advance their careers, were causing science to go off the rails. If more human-proofing and de-biasing were built into the process — through stronger peer review, cleaner and more consistent funding, and more transparency and data sharing — some of these biases could be mitigated.

These fixes will take time, grinding along incrementally — much like the scientific process itself. But the gains humans have made so far using even imperfect scientific methods would have been unimaginable 500 years ago. The gains from improving the process could prove just as staggering, if not more so.

Uberworld

The world’s most valuable startup is leading the race to transform the future of transport

“LET’S Uber.” Few companies offer something so popular that their name becomes a verb. But that is one of the many achievements of Uber, a company founded in 2009 which is now the world’s most valuable startup, worth around $70 billion. Its app can summon a car in moments in more than 425 cities around the world, to the fury of taxi drivers everywhere. But Uber’s ambitions, and the expectations underpinning its valuation, extend much further: using self-driving vehicles, it wants to make ride-hailing so cheap and convenient that people forgo car ownership altogether. Not satisfied with shaking up the $100-billion-a-year taxi business, it has its eye on the far bigger market for personal transport, worth as much as $10 trillion a year globally.

Uber is not alone in this ambition. Companies big and small have recognised the transformative potential of electric, self-driving cars, summoned on demand. Technology firms including Apple, Google and Tesla are investing heavily in autonomous vehicles; from Ford to Volvo, incumbent carmakers are racing to catch up. An epic struggle looms. It will transform daily life as profoundly as cars did in the 20th century: reinventing transport and reshaping cities, while also dramatically reducing road deaths and pollution.

WHEELS OF CHANGE

In the short term Uber is in pole position to lead the revolution because of its dominance of chauffeured ride-hailing, a part of the transport market that will see some of the fastest growth. Today ride-hailing accounts for less than 4% of all kilometres driven globally, but that will rise to more than 25% by 2030, according to Morgan Stanley, a bank. The ability to summon a car using a smartphone does not just make it easy for individuals to book a cheaper taxi. Ride-sharing services like UberPool, which put travellers heading in the same direction into one vehicle, blur the boundaries between private and public transport. Helsinki and other cities have been experimenting with on-demand bus services and apps that enable customers to plan and book journeys combining trains and buses with walking and private ride-sharing services. Get it right, and public-transport networks will be extended to cover the “last mile” that takes people right to their doorsteps. This will extend the market for ride-hailing well beyond the wealthy urbanites who are its main users today.

But in the longer term autonomous vehicles will drive the reinvention of transport. The first examples have already hit the road. Google is testing autonomous cars on streets near its headquarters in Mountain View. A startup called nuTonomy recently launched a self-driving taxi service in Singapore. Tesla’s electric cars are packed full of driver-assistance technology. And within the next few weeks Uber itself will offer riders in Pittsburgh the chance to hail an autonomous car (though a human will be on hand to take back the wheel if needed).

Self-driving cars will reinforce trends unleashed by ride-hailing, making it cheaper and more accessible. The disabled, the old and the young will find it easier to go where they want. Many more people will opt out of car ownership altogether. An OECD study that modelled the use of self-driving cars in Lisbon found that shared autonomous vehicles could reduce the number of cars needed by 80-90%. As car ownership declines, the enormous amount of space devoted to parking—as much as a quarter of the area of some American cities—will be available for parks and housing instead.

It is not clear which companies will dominate this world or how profitable it will be. Uber will not win in its current form: a ride-hailing business which depends on human drivers cannot compete on roads full of self-driving cars. But this existential threat is spurring the firm’s innovation (see article). With its strong brand and large customer base, Uber aims to establish itself as the leading provider of transport services in a self-driving world. It is also branching out into new areas, such as food delivery and long-distance cargo haulage using autonomous trucks. There is logic in this ambition. Carmakers lack Uber’s experience as a service provider, or its deep knowledge of demand patterns and customer behaviour.

But firms that pioneer new technological trends do not always manage to stay on top. Think of Nokia and BlackBerry in smartphones, Kodak in digital cameras or MySpace in social networking. Much will depend on which firm best handles the regulators. Technology companies have a history of trying new things first and asking for permission later. Uber’s success in ride-hailing owes much to this recipe, yet when it comes to autonomous vehicles, the combination of vague rules and imperfect technology can have deadly consequences.

Even for the winners, it is not clear how great the rewards will be. As more firms pile into ride-sharing, and autonomous vehicles become part of the mix, the business may prove to be less lucrative than expected. By matching riders with drivers, Uber can offer transport services without owning a single vehicle, and keep the lion’s share of the profits. But if its service becomes an integral part of urban transport infrastructure, as it hopes, Uber could end up being regulated, more highly taxed, broken up or all of the above. In a self-driving world, Uber might also have to own and operate its own fleet, undermining its “asset-light” model. The would-be high-margin digital disrupter would then look more like a low-margin airline.

The great road race

For now Uber is the firm to beat in the race to transform the future of personal transport. Unlike Apple or Google, it is singularly focused on transport; unlike incumbent carmakers, it does not have a legacy car-manufacturing business to protect. Its recent rapprochement with Didi, its main rival in China, has removed a major distraction, allowing it to devote its $9 billion war chest to developing new technology. Its vision of the future is plausible and compelling. It could yet prove a Moses company, never reaching its promised land—it might end up like Hoover, lending its name to a new product category without actually dominating it. But whether Uber itself wins or loses, we are all on the road to Uberworld.

France’s debate over the burkini ban, explained

Updated by Tara Golshan on August 26, 2016

Update: France’s top administrative court overturned the burkini ban Friday, according to the Associated Press.

This week, photos emerged of four armed male French police officers demanding that a woman wearing a headscarf, long sleeves, and pants remove layers of her clothing at a beach in Nice. She was given a ticket for not “wearing an outfit respecting good morals and secularism.”

In other words, she was wearing something resembling a burkini — a full-body swimsuit for women that is designed to be “in line with Islamic values” — that, as of this summer, was banned in 15 French towns, including Cannes and Nice. After much anger and protest worldwide, France’s top administrative court overturned the burkini ban on August 26, the Associated Press reported.

This wasn’t a unique incident; another woman in Cannes reported being fined for wearing leggings, a tunic, and a headscarf on the beach. The “burkini ban” — which fined burkini wearers up to €38 ($42) — was adopted in response to a tragic terror attack in Nice in July, when a truck driver who had reportedly pledged allegiance to ISIS drove through a group of Bastille Day revelers watching fireworks on the boardwalk, killing more than 80 people.

French officials championed the law as a protection of France’s secularity, a core tenet of the country’s constitution, as well as a defense against the “regressive” nature of the burka. Gérard Araud, the French ambassador to the US, was among those publicly defending the ban.

But not everybody agreed. While French officials argued that the burkini represents Islam’s inability to assimilate into French values, the burkini was actually invented to allow Muslim women to participate more fully in Western culture.

The burkini’s inventor, Lebanese-Australian designer Aheda Zanetti, told Politico that France’s ban is “just hatred” toward Muslims.

“I created [burkinis] to stop Muslim children from missing out on swimming lessons and sports activities,” she said. “I hope the French prime minister and the mayors see that they should find out how to combine communities, how to work around issues, instead of harming the community, taking the beach away from some people and punishing them. That’s just hatred.”

Nevertheless, before being overturned by France’s top administrative court, the ban was upheld in a court in Nice last week: “The wearing of distinctive clothing, other than that usually worn for swimming, can indeed only be interpreted in this context as a straightforward symbol of religiosity,” the court stated, arguing that such displays of religion are counter to the country’s secular code.

This was not the first time France has banned Islamic dress — headscarves are also banned in public schools — nor is it the first time France’s attitudes toward Muslim immigrants have prompted debate.

Rather, the burkini ban was the latest in a series of policies, or attempted policies, that have discriminated against Muslim immigrants in a manner that research suggests could have a negative effect on the fight against radicalization.

How France’s secularism leads to discriminatory policies

France’s approach to the separation of church and state is much more vigorous than that of other European countries or the United States.

France doesn’t just keep religion out of political policies; it believes that religion should be separate from the national identity altogether. This concept, laïcité, enshrined in the constitution’s declaration of France as a secular republic, was developed during the French Revolution to weaken the Catholic Church’s influence on government.

This approach to secularity was relatively easy to enforce when France was a more homogeneously Christian country. But France today is far more ethnically and religiously diverse, and the modern interpretation of laïcité has disproportionately targeted Muslims and other minority religious communities in France.

Laïcité is what led the French government to ban religious symbols and clothing — including crosses, yarmulkes (the Jewish skullcap, also called a kippah), and Islamic headscarves — from public schools in 2004 and to prohibit face coverings in public in 2011, and it is what officials invoked to defend the burkini ban. But in reality, laws that ban all religious symbols seem to mostly target Muslims.

French Muslims already experience a disproportionate amount of discrimination compared with French Christians and Jews in other aspects of French life like the workforce, and since the Charlie Hebdo shooting last year, France has seen a significant uptick in threats of anti-Muslim violence.

Ultimately these policies and attitudes have cultivated an increasingly discriminatory culture against Muslims living in France that further ostracizes them from French communities.

French officials are right to be concerned with growing radicalization. French-speaking countries are the most likely to produce people who leave to fight for terrorist organizations in Iraq and Syria, according to research by the Brookings Institution’s William McCants and Chris Meserole.

But the notion that banning an article of clothing will help stop radicalization is wrong. In fact, researchers assert the opposite, as my colleague Zack Beauchamp explains:

McCants and Meserole hypothesize that this culture of laïcité has alienated French and Belgian Muslims from national culture, making them more vulnerable to radicalization. They argue that French influence in former colonies such as Tunisia and Lebanon, which are Francophone, could lead similar dynamics to play out there, exacerbating religious-secular divides and thus strengthening extremist narratives.

The causes for radicalization are hard to define, but they usually come down to both societal and individual factors. As Vox’s Jennifer Williams explains, societal factors include “the presence of a large minority population that is socially, politically, and economically marginalized,” “a cultural or political hostility toward religion in general or Islam in particular,” and “treatment of certain groups as ‘suspect communities’ that are subjected to invasive and overbearing counterterrorism efforts.”

France is home to the largest Muslim population in Europe: an estimated 5 to 6 million people (about 8 percent of the total population). Yet with policies like the burkini ban and systemic anti-Muslim discrimination among employers, this community frequently feels targeted. This is not to say that there are not other factors at play in France (you can read about them here), but limiting a community’s ability to integrate into society, by restricting something as ordinary as a family trip to the beach, is not the way to address problems of integration and alienation.

The burkini ban played into the “war on Muslims” narrative ISIS is trying to propagate

Targeting Muslims also has another effect: It plays into the ISIS narrative that the Western world is engaged in a “war on Muslims.”

“Al Qaeda and the so-called Islamic State thrive every time Western countries give them ammunition to say that the West is discriminating or stigmatizing Muslims,” Sara Silvestri, a religion and politics professor at City University London, told CNN. “The effect of these laws is that Muslims feel marginalized and in turn, the feeling of being unwelcome impacts their ability and willingness to integrate into society, can cause withdrawal and lead to engagement with radical groups.”

Much of ISIS’s propaganda is based on the idea that the group is a welcoming home for Muslims who have been pushed out and discriminated against in Western society.

Rules that further alienate Muslim communities in turn make ISIS’s claims more convincing to those at risk of radicalization.

How Does Having More Self-Driving Cars Make Taxis Cheaper?

Uber’s CEO doesn’t think self-driving cars will cost jobs, and he might be right

When Uber announced Thursday that it would begin offering rides in self-driving cars to customers in Pittsburgh, it caused a lot of consternation among people worried about job losses. After all, sharing economy companies like Uber are supposed to represent one of the economy’s big sources of job growth. If even Uber is automating its fleet, doesn’t that mean workers are doomed?

But in an interview with Business Insider, Uber CEO Travis Kalanick argues that Uber drivers shouldn’t worry. He expects to continue offering work to drivers for a long time:

If you’re talking about a city like San Francisco or the Bay Area generally, we have, like, 30,000 active drivers. We are going to go from 30,000 to, let’s say, hypothetically, a million cars, right? But when you go to a million cars, you’re still going to need a human-driven parallel, or hybrid. And the reason why is because there are just places that autonomous cars are just not going to be able to go or conditions they’re not going to be able to handle. And even though it is going to be a smaller percentage of the whole, I can imagine 50,000 to 100,000 drivers, human drivers, alongside a million-car network. So I don’t think the number of human drivers will go down anytime soon.

Obviously, Kalanick has an interest in putting a positive spin on this since he depends on Uber drivers to make the service operate today. But his argument isn’t crazy. Similar things have happened in other industries.

For example, when automated teller machines were developed, many people thought ATMs would put most bank tellers out of work. But that didn’t happen. ATMs made it cheaper to open a bank branch, allowing banks to open many more branches in the 1990s. As a result, teller employment has actually grown slightly over the last 40 years, as this chart from economist James Bessen shows:

[Chart: US bank teller employment over the past four decades, via economist James Bessen]

The same logic could apply to the car market. If self-driving cars make taxi rides a lot cheaper, people will take a lot more taxi rides. And that could create more jobs even if the number of jobs per ride goes down. In the long run, there won’t be someone sitting in the driver’s seat, but there will be lots of other jobs supporting cars — things like maintaining, repairing, and cleaning the vehicles, handling customer service calls, keeping maps updated, and so forth.

Some jobs will be destroyed; others will be created. The net impact on the job market isn’t obvious.
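To see why the net effect is ambiguous, here is a minimal back-of-the-envelope sketch in Python (every number is a hypothetical assumption, not a figure from Uber, Morgan Stanley, or Bessen): total support jobs are roughly rides multiplied by jobs per ride, so employment can hold steady or even grow if cheaper fares expand ridership faster than automation shrinks the labor needed per ride.

# Hypothetical sketch: cheaper rides vs. fewer jobs per ride.
# All numbers below are illustrative assumptions, not real Uber figures.

def total_jobs(daily_rides, jobs_per_thousand_rides):
    """Support jobs: maintenance, cleaning, customer service, mapping."""
    return daily_rides / 1000 * jobs_per_thousand_rides

# Today: human-driven ride-hailing in a large city (assumed values).
today = total_jobs(daily_rides=500_000, jobs_per_thousand_rides=60)

# Self-driving future: lower fares triple ridership, while automation
# cuts the labor needed per ride by two-thirds (again, assumed values).
future = total_jobs(daily_rides=1_500_000, jobs_per_thousand_rides=20)

print(f"Jobs today:  {today:,.0f}")   # 30,000
print(f"Jobs future: {future:,.0f}")  # 30,000

# The outcome depends entirely on whether ridership growth (x3 here)
# outpaces the decline in jobs per ride (x1/3 here); tilt either number
# and total employment rises or falls accordingly.

Under these particular assumptions the two effects cancel exactly, which is the point: the sign of the change is an empirical question, not a foregone conclusion.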

Economics Lessons from a Roomba smearing dog poop

A Roomba smeared dog poop all over this man’s house. There’s an economic lesson here.

Updated by Timothy B. Lee on August 17, 2016, 8:26 a.m. ET tim@vox.com

A robot vacuum cleaner sounds like a great idea. I have a Roomba, one of the most popular models, and most of the time it works great. But sometimes there are unexpected problems.

In a recent Facebook post, an Arkansas man described just how bad these problems can be. His dog had an accident on the floor, and then the Roomba started its scheduled cleaning.

“If your Roomba runs over dog poop, stop it immediately and do not let it continue the cleaning cycle,” the man wrote. Unfortunately, he happened to be asleep when the Roomba ran. The result: it “spread the dog poop over every conceivable surface within its reach, resulting in a home that closely resembles a Jackson Pollock poop painting.”

Silicon Valley optimists like venture capitalist Marc Andreessen have predicted that digital technology would revolutionize every facet of our lives. And of course that’s been true for industries like music, news, and maps. But other tasks have proven more resistant to digital transformation.

Earlier this year, I wrote about Nest, whose popular smart thermostat made it a poster child for smart homes. But the company, which was acquired by Google in 2014, has struggled to develop new products, raising questions about whether Google overpaid for the company.

A similar story can be told about iRobot, the company behind the Roomba robotic vacuum cleaner. The company is hardly a failure, having sold 15 million units since it was introduced in 2002. But the Roomba remains a niche product, and iRobot hasn’t come up with another hit.

These companies are struggling for similar reasons: Their products demand too much from their users while providing too little value in return.

Last year, iRobot sold 2.4 million Roombas. By any reasonable metric, that’s a successful product. But in a nation of 320 million people (not to mention a world with more than 7 billion people), it’s still a niche product. The vast majority of American households don’t have a Roomba or any other robot vacuum cleaner and seem to be in no hurry to buy one.

And if you talk to Roomba owners, it’s not hard to see why. “It gets stuck a lot,” my Vox colleague Sarah Kliff told me. “I can’t really leave it at home unsupervised.”

Sarah has a table with a curved metal bottom that her Roomba finds fiendishly difficult to navigate. Often she’ll come home to find that it drove up the table’s leg and got stranded, the cleaning job unfinished. The Roomba also terrifies Sarah’s dog.

My Roomba also has problems with getting stuck. But I’ve also found that it just doesn’t save me that much time. I still have to tidy up the room before letting the Roomba loose. Then when it’s done, I have to empty the dustbin and — often — dig out debris that got caught in the rollers. It’s not as much work as using an old-fashioned vacuum cleaner, but it’s not that much less work.

And then there are the times when the Roomba wreaks havoc. Asked about poop-related accidents, a spokesman for iRobot told the Guardian that “quite honestly, we see this a lot.” Neither Sarah nor I have experienced this particular misfortune, but we’ve had other, less traumatic problems with our Roombas.

“My old roommate had a Roomba that ran into my mirror,” Sarah told me. “The mirror toppled over and broke.”

One day, my Roomba got ahold of a spool of thread. When I got home, it had unwound the entire spool and wrapped it around the cleaning brush roll. It took several minutes to get it unwound, and I had to throw away the rat’s nest of thread that was left.

I have a $399 Roomba 650. iRobot recently introduced a new high-end model, the $899 Roomba 980, which comes with a built-in camera, a longer-lasting battery, and other improvements. But as Fortune’s Kif Leswing pointed out in a review last October, these improvements only get you so far. The longer battery life doesn’t help if the dustbin gets full or your home has multiple levels. And the latest Roomba seems about as clumsy as its cheaper cousins — Leswing says it “beached itself on the legs of my Ikea Poang chair.” And it ate one of his cat toys, damaging one of the robot’s wheels.

Why it’s hard for smart appliances to add value

The Roomba is by far iRobot’s most successful product. Over the years, the company has built a couple of mopping robots, a pool-cleaning robot, and a device for cleaning out your gutters. None of them have been big hits.

Other companies have tried to create internet-connected lawn sprinklers, crock pots, and lightbulbs.

A fundamental problem here is that for many tasks in the physical world, there just isn’t that much room for software and complex electronics to add value.

The home appliances that have done the most to improve people’s lives are the ones like dishwashers and washing machines that took a really time-consuming and tedious task and made it dramatically faster.

But in many cases, the preinternet devices in our homes are already pretty good. There isn’t a ton of room for further improvement. People don’t spend a lot of time adjusting their thermostats, so the better interface on a Nest Learning Thermostat doesn’t add a ton of value. Smart lightbulbs or robotic gutter cleaners seem even more like a solution in search of a problem.

Machines add the most value when they can be operated at scale in a controlled environment — washing machines and dishwashers are useful because you can wash dozens of dishes or shirts at the same time. And because all the action happens inside the machine, there’s less room for unpleasant surprises — like a stray cat toy getting into the gears, or dog poop being spread across the floor.

In contrast, home robots and connected home devices are trying to operate in the chaotic and nonstandardized environment of a modern home. It’s an inherently more difficult problem to design a product that will work flawlessly in a wide variety of different home types.

And this is a reason to be skeptical that we’ll see rapid progress in household robotics or smart homes in the coming years. It has proven difficult to build a robot vacuum cleaner or a smart thermostat that’s a big hit with the public. And other home automation tasks — like iRobot’s mopping robots — have been even less popular than that. The concept of smart homes and cleaning robots sounds appealing in theory, but making it useful in practice is surprisingly difficult.