Stressing for success

Getting anywhere in academic science — from securing funding to progressing your career — is a tough slog. We need to reevaluate what it means to be a successful researcher.

 
Illustration by Kayla Oliver

Overt Analyser is a monthly column by Chloe Warren that reflects on her experiences as a twenty-something scientist. Chloe is a PhD student in medical genetics at the University of Newcastle and really thinks too much about most things.

Since its launch in 1948, at the dawn of the Golden Age of Advertising, the Jimmy Fund has raised over US$940 million for the Dana-Farber Cancer Institute in Boston, Massachusetts. The fund marked the beginning of the oddly overlapping relationship between marketing and research.

The campaign was the first of its kind: the first collective effort established to raise money for a specific medical research cause. As with many successful campaigns, celebrity endorsement, community and sports partnerships, media coverage and emotional branding have made the Jimmy Fund what it is today: a fruitful marketing concept that still raises millions of dollars annually for cancer treatment and research.

In 2013, the number of registered charities in Australia rose from 1,200 to 2,000. Though I’ll admit I haven’t personally investigated each of these, I’m pretty sure they’re all doing great work and no doubt deserve your hard-earned money. But while there will always be demand for donations, as there is always more work to be done, more charities mean that a higher proportion of donations goes towards administrative start-up costs, as money is ‘thinned out’ over a greater number of organisations.

This pattern is also occurring in the UK, where concerns are being raised about aggressive fundraising techniques. Recent amendments to the UK Charities Bill will oblige charities to demonstrate how they will seek to protect vulnerable people from the effects of such practices. Charities have also been accused of failing to speak out against unethical or poor practice by corporate donors, for fear it may damage their working partnerships and lead to a decrease in future donations.

These are just some of the consequences when demands are too high and institutions, as well as individuals, are put under pressure. Insistence on an ever-increasing output can lead to inappropriate marketing tactics and corruption. Missions are seemingly abandoned and core values are degraded as people struggle to compete in an environment with an increasing number of goals and deadlines.

As you may or may not be aware, researchers across the country are frantically writing away as the annual NHMRC (National Health and Medical Research Council) and ARC (Australian Research Council) grant rounds draw to a close. Just 18% of ARC applicants can expect to be successful, while only 15.5% of NHMRC applicants will be granted funding. So when scientists are destined to exist in this hyper-competitive working space, what happens to their core values over time? What happens to the values upheld by Australian science?

Most researchers are not paid a salary by their research institution, but must instead rely on funding from an external body. This can mean applying for government money (like NHMRC or ARC grants) or charity money, finding a generous philanthropist, or even crowdfunding. But however researchers find their income (as well as project money for lab consumables, facilities, staff, students, administration and travel), they have to be able to persuade someone (or, more often, a committee) that they are worth it.

Like any job application, when researchers apply for funding they have to demonstrate that they (and their research) fit the bill. Depending on the funding available, that can mean: being translatable to real-world practice, being on a specific topic, being original, fitting in with an existing project, engaging with the research consumer, having access to specific facilities or equipment, or being seen to have an active involvement in public engagement. These are all fair enough requests, but scientists also have to be able to demonstrate their worth beyond these factors. In the world of academic research, your worth is usually approximated by the number of scientific articles you have published, multiplied roughly by a factor of how great you are at selling your ideas. Successful (read: employed) scientists must therefore be able to sell their ideas and their research.

In a recent blog post, Dr Gerald Carter, a postdoctoral researcher at the Smithsonian Tropical Research Institute and the University of Toronto, takes stock of some of the blatant discrepancies between being a good scientist and being a successful researcher.

Modern researchers are forced to publish as many papers as they can, as quickly as they can. Tobias von der Haar/Flickr (CC BY 2.0)

Science is slow. Or, as Dr Carter puts it, quoting Dr Bennett Galef, “Science is a marathon, not a sprint.” Replication should be encouraged, and deliberation over conclusions should be so rigorous that it’s almost painful. Yet success in academia requires a regular churning out of papers, which in turn requires a constant churning out of data, and sometimes the quality of that data suffers as a consequence.

Science is impartial. Although hypothesis-driven research is a pretty effective strategy for answering a question, a good scientist should never have any personal attachment to a hypothesis. This is in contrast to what often happens when scientists are forced to ‘sell’ their hypotheses for the sake of their careers. In his blog, Carter writes, “…being a successful academic, like being a successful business, means having a successful brand that gives people confidence in what you say.” In this context, reviewers of publications may stick so steadfastly to their ‘brand’ that they reject papers on the grounds that they contradict their own ideas (and deflate their egos).

Science is a conversation. Open access databases and publications are becoming more popular, and for good reason: these resources generate “…more transparency…and easier replication”. However, the idea that sharing data can be a damaging practice is still a prevalent one. This was highlighted in a recent New England Journal of Medicine editorial, in which the author expressed concern that ‘parasitic researchers’ may ultimately ‘…even use the (open access) data to try to disprove what the original investigators had posited.’ Gosh, imagine having your data cross-examined by an independent body and held up to reproducibility standards.

Sharing data is essential to much modern research, but is still frowned upon in some parts of the academic community. mcgarrybowen london/Flickr (CC BY 2.0)

Science should be allowed to speak for itself. Instead, researchers have become part of a system where salesmanship, marketing and a self-centred spirit are integral to the job. Although some funding bodies now stipulate that collaboration should be part and parcel of the research process, too often the fear of ‘losing out’ to the competition stops researchers from helping each other with their work, or even just sharing ideas.

Science is up for debate. So why are the standards for success in academia so stale?

The truth is, no one really has the answer, and it’s not just a problem in Australia. Research funding will always be a competitive process, so there will always be losers. But how do we stop this competitive environment from eroding the very nature of scientific practice?

One idea is to readjust the metrics so that less emphasis is placed on publication output. Public engagement is a growing area of government and industry interest, as investors and taxpayers are keen to see tangible and wide-reaching outcomes from their investments. Of course, designing metrics that accurately measure engagement is a complex enough issue on its own.

The former Chief Scientist of Australia, Professor Ian Chubb, has suggested that university funding be linked to the number of “startups” each institution produces, with the aim of increasing entrepreneurship within the country. His 2015 report, Boosting High-Impact Entrepreneurship in Australia, described concerns that the current system encouraged academics to “focus their efforts on producing publications rather than on engaging with industry or teaching…” Given that shoe-horning academia into a capitalist system is arguably the reason we’re in this mess in the first place, I, for one, am not quite sure how effective such measures would be.

The Australian Early- and Mid-Career Researcher (EMCR) Forum also has a couple of ideas. This subgroup of academics is especially vulnerable under the current system, walking a tightrope between dependence on more senior researchers for support and the need to clear their own ground for independent research projects and careers. “The pressure to publish is a systematic issue that inevitably leads to short cuts, rushed data analysis and even misconduct,” said Dr Nikola Bowden, chair of the EMCR Forum. “By changing the focus to innovation, industry engagement and application and impact of discoveries, we can return to a scientific sector focused on science rather than metrics”.

It’s true that, in the right context, competition can be a great way to spur success. But while science is by definition a rigorous discipline, requiring patience and an open mind, competition can leave it vulnerable to corruption. If we are to tackle and police this corruption, the standards of academic practice need continuous review and improvement.

Edited by Jack Scanlan