Report from Meeting on Career Outcomes
On Monday, August 7, 2017, a small group gathered for a meeting on improving transparency in biomedical Ph.D. career outcomes. The meeting was sponsored by Rescuing Biomedical Research and brought together several BEST institutions as well as the AAU, AAMC, and some non-BEST institutions. Four BEST members attended the meeting and share their reflections here. The following report is from Abby Stayart (University of Chicago), Ambika Mathur (Wayne State University), Roger Chalkley (Vanderbilt University School of Medicine), and Patrick Brandt (University of North Carolina at Chapel Hill), and does not necessarily reflect the opinions of the BEST Consortium.
The meeting focused on how we should advertise our outcomes in graduate and/or postdoc education. Ostensibly we were to focus on the range of careers currently available to individuals engaged in training in biomedical research (i.e., the taxonomy), though some graduate school deans at the meeting are now expanding the coverage to all of the programs under their aegis. However, in order to report on such a taxonomy of careers, one has to collect the details of what careers are actually chosen by the students/postdocs. This then becomes outcomes analysis. Finding out just what previous trainees actually do for a career can be a daunting task, and just how to do this was discussed at some length.
This report is divided into two parts: first, how can one follow the career pathways of our previous trainees? And second, how should we assign a specific career to a commonly accepted taxonomy?
In general there seems to be an assumption that collecting the outcomes data would be expensive and difficult, and that this presents hurdles which are almost prohibitively hard to overcome. However, a major outcome from the meeting was that these concerns are perhaps exaggerated and can mostly be overcome; strategies have already been defined which point the way to a highly effective and relatively inexpensive approach. It was extremely interesting to see that the four BEST program representatives present were able to describe simple and direct approaches which they had developed, in essence as experiments, which are now yielding fruit. The common strategy is to start with individuals who are leaving graduate or postdoctoral training on a regular basis. It is easy to identify the bulk of their initial plans, which form the starting basis for the subsequent tracking. It was fascinating to see that all the programs reporting on outcomes and tracking use essentially the same approach, which involves a range of methods but always a sizable degree of internet stalking. All reported using direct email to a known email address, which produces data for about 20-30% of trainees; probing faculty awareness of the location of their students adds about 10%; checking training grant tables can help with another 5-10%; but the biggest return (over 50%) comes from LinkedIn, Google, PubMed and, on occasion, from other alumni who have stayed in touch with friends and colleagues.
Whatever the approach, all of the schools represented have attacked this need, and uniformly they report great success, with over 90% of alumni identified. All reported that much of this strategy can be implemented inexpensively using part-time or work-study students, provided that those students are carefully briefed on the meaning of outcomes with which they may not be familiar; all programs had identified mechanisms to this end. In this way, several programs have now begun to track the career outcomes of as many as 1,000 former trainees out through at least 15 years. It was gratifying to see the BEST programs collectively leading the pack, though the University of Michigan is also a leader in this regard, along with the postdoc program at UT Southwestern.
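As a rough back-of-envelope sketch (assuming midpoint yields within the ranges reported above, not exact figures), the per-source identification rates can be combined to see how the schools approach the >90% coverage they reported:

```python
# Hypothetical sketch: approximate per-source identification yields from the
# meeting, with midpoints assumed where a range was reported.
source_yield = {
    "direct email to a known address": 0.25,               # reported 20-30%
    "faculty awareness of student locations": 0.10,        # reported ~10%
    "training grant tables": 0.075,                        # reported 5-10%
    "LinkedIn / Google / PubMed / alumni contacts": 0.50,  # reported >50%
}

total_coverage = sum(source_yield.values())
print(f"Estimated combined coverage: {total_coverage:.1%}")
```

In practice the sources overlap (an alum found by email may also appear on LinkedIn), so the sum is only a ceiling, but it illustrates why the internet-search channel dominates the return.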
The overall conclusion of this exercise, though, was that this can be done, and that it is not overly complicated or expensive. The funding from the NIH for the BEST programs was clearly sufficient to get this process underway at those institutions, even though this aspect of the BEST programs actually utilizes only a relatively small amount of each program's overall commitment, perhaps on the order of 30% of one FTE.
These teams all reported that they catch up with their alumni every two years.
Adoption of a unified Career Taxonomy
In discussion of taxonomy, at all times we must remember that the goal is to develop a standardized and well-defined set of categories that institutions may choose to adopt in whole or in part, with the hope that the standardization will help us to report our outcomes and mine national data sets. While we cannot expect absolute compliance, we believe that it is possible to distribute a recommendation of standard categories and definitions. Within those categories, schools may choose to lump or further dissect.
Of the 3 subgroups formed within the RBR meeting, the “data presentation” group was tasked with assessing taxonomies that already existed in the national conversation and proposing a final unified taxonomy of career outcomes. As a starting place, the subgroup was offered the BEST working group’s product, which itself had reworked, evolved, and added to the career categories originally included in the Science Careers myIDP, renaming them “Job Functions”. The BEST group had also seen the value of adding a top-most “Workforce Sector” tier (including Academia, Government, Nonprofit, and For-Profit) into which all career categories would fall, with the idea that we could emphasize the utility of a PhD throughout the workforce and beyond academia. The ‘Workforce Sector’ tier was directly adopted from the extensive work of the career development group at UCSF. This two-tier BEST working group proposal was initially introduced to the community at FOBGAPT2 and GCC. Essentially, the RBR working group combined the BEST working group’s Job Functions and UCSF’s pre-existing two-tier system that includes “Workforce Sector” and, below that, “Career Type” (including “primarily research”, “primarily teaching”, “science-related”, “unrelated to science”, and “further training or education”). The resulting 3-tier system creates a lovely progression with increasing granularity as the system steps through Workforce Sector → Career Type → Job Function. As mentioned above, individual schools may choose to collect all tiers or a subset of them. The addition of the Career Type tier offers a very broad roll-up of the Job Functions, which will prove useful to administrators who cannot identify the Job Function of an alum but could easily bin them into one of the broader Career Types.
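The three-tier progression and its roll-up behavior can be sketched as a simple data structure. This is purely illustrative: the Workforce Sector and Career Type names come from the report, but the Job Function entries below are hypothetical placeholders, not items from the agreed list.

```python
# Illustrative sketch (assumed entries, not the official taxonomy):
# Workforce Sector -> Career Type -> Job Functions, with a "roll-up" that
# bins a specific Job Function into its broader tiers, as described above.
TAXONOMY = {
    ("Academia", "Primarily research"): ["Faculty: tenure track", "Staff scientist"],
    ("Academia", "Primarily teaching"): ["Instructor", "Adjunct"],
    ("For-Profit", "Science-related"): ["Consulting", "Science writing"],
    # Government, Nonprofit, and the remaining Career Types would follow.
}

def roll_up(job_function):
    """Bin a Job Function into its (Workforce Sector, Career Type) pair."""
    for (sector, career_type), functions in TAXONOMY.items():
        if job_function in functions:
            return sector, career_type
    return None  # unknown Job Function; administrator must bin manually

print(roll_up("Instructor"))
```

This mirrors the administrative use case above: even when the precise Job Function of an alum is unknown, the broader Career Type bin is usually recoverable.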
In addition to the inclusion of a middle tier, the RBR taxonomy differed from the BEST working group’s taxonomy in a few notable ways, which we will need to consider whether to adopt or debate. The first is a proliferation of Job Functions related to teaching and research, in order to capture the type of teaching and the tenure-track status of the individual. In spite of the BEST working group’s attempt to maintain as few Job Functions as possible, we suggest that we yield on this point since, among other things, the delineation is in fact of interest to a significant portion of our population. It does result in a Job Function list that has 26 options instead of 20. At an institutional level, schools may choose to conflate Job Functions in order to simplify their collection or the presentation of their data.
The second difference between the BEST group’s product and the product adopted at RBR is separating ‘postdoctoral researcher’ from the broader description of ‘pursuing further training or education’. Since one of our primary missions here is to highlight the large part of our population that occupies this temporary training position, we all acknowledge the worth in making that separation and we suggest that we adopt it. In practice, that separates our research postdocs from PhDs who are going to business, law, or medical school. That being said, at this time the RBR taxonomy has placed ‘Postdoctoral Research’ in the ‘Career Type’ tier, alongside (and separate from) ‘Primarily research’, ‘Primarily teaching,’ ‘Science-related’, etc. After significant thought and discussion with the BEST Steering Committee and others, we suggest that ‘Postdoctoral Researcher’ should exist in the list of Job Functions, alongside other academic positions such as Faculty, Instructor, and Adjunct. We would like to gauge the BEST working group’s opinion about this so that we can represent our group in near-term conversations; we do not know how strongly the RBR group feels aligned with ‘Postdoctoral Researcher’ remaining in ‘Career Type’, so there may not be any discordance.
In summary, regardless of the placement of postdoctoral researcher, we can confidently say that the BEST working group contributed greatly to the final outcome of this collaborative taxonomy. We have received permission from the organizers of the RBR meeting to present the taxonomy in a poster at the GREAT meeting. The whole group is excited for the roll-out and anticipates that it will be referenced in multiple formats (posters, workshops, white papers, publications). We can confidently begin implementing a taxonomy that many other groups have endorsed, and disseminating it as broadly as possible so that schools nationwide and beyond can choose to adopt or abstain from a nationally-embraced taxonomy.
A non-trivial element of the practical application of the taxonomy is the format in which it will be presented. We anticipate that some schools will include it in alumni surveys and will build in branching logic so that individuals who select the Career Type “Primarily Research” are piped to a list of Job Functions that is a subset of the full list of 26. In presentations where branching logic is not possible or practical (an Excel spreadsheet, for example), the presentation may be complete lists that are navigated by an administrator.
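The survey branching described above can be sketched in a few lines. Again this is hypothetical: the Career Type names come from the report, but the Job Function subsets shown are placeholders, not the agreed 26-item list.

```python
# Hypothetical sketch of survey branching: choosing a Career Type pipes the
# respondent to a subset of the full Job Function list (placeholder entries).
JOB_FUNCTIONS_BY_CAREER_TYPE = {
    "Primarily research": ["Faculty: tenure track", "Staff scientist"],
    "Primarily teaching": ["Instructor", "Adjunct"],
    "Science-related": ["Science policy", "Science writing"],
}

def job_function_choices(career_type):
    """Return the Job Function options to display for a chosen Career Type."""
    return JOB_FUNCTIONS_BY_CAREER_TYPE.get(career_type, [])

print(job_function_choices("Primarily teaching"))
```

Survey platforms such as Qualtrics or REDCap support this kind of conditional display natively; the flat-spreadsheet fallback simply presents every list and leaves the navigation to the administrator.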
Questions toward the end of the session
A question was addressed to those schools which had established what seem to be rather robust programs for recording outcomes: how were these career-tracking programs initiated and then implemented? A lot of discussion ensued. However, to a degree the drive to establish these reports came from several common sources of inspiration. First, the NIH has played a key role, through the need to collect some of these data items for training grant applications. A second driver came from the increasingly wide acceptance that institutions should collect and publish their career outcomes. In part, of course, this satisfies the transparency needs for truth in recruiting and is directed toward new or incoming students/postdocs. In addition, such an analysis can help career offices plan the direction of their support programs depending on changes in career interest among their trainee constituencies. Once more the BEST programs seem to have been well in the vanguard.
An additional question concerned how data acquisition was initiated at those institutions which have had considerable success in this area. Most, but not all, of the programs which are now leaders in this area had some previous, tentative activity in this arena, especially for graduate students. However, the notion seemed to be that once a programmatic need was perceived, then in the absence of opposition from higher administration, and with modest funding, the teams were able to get started. Of course, in some cases the BEST funding clearly provided the necessary push, and indeed all of the 17 BESTies have now established operations in this domain. We were asked how one might begin if there were minimal activities in this regard at a given institution, and what it would take to start up. Again, the assessment from those schools who have these programs was that it would take effort from someone who understood the educational environment (possibly, but not necessarily, recruited from the postdoc pool). Such an individual might use available database structures and survey tools (REDCap, Qualtrics, SurveyMonkey, etc.), purchased at modest or no cost, and might expend a great deal of effort in the first year, but much less in subsequent years. Some coding skills might come in handy, but are not essential.
All of us reported that we had run into essentially no pushback from either faculty or senior administration. In all likelihood this will be the case at many institutions, though possibly some may be happy to get the data but reluctant to publish it. We also discussed the need for IRB approval, and again the general opinion was that you might as well obtain it. It is usually not a difficult step, and certainly if you plan to publish papers in this field, approval is non-negotiable. It is worth noting that IRB approval for this type of activity can be granted retroactively if a decision to publish is made.