What Went Wrong?

TPM Reader TC chimes in from the world of medical research …

I’m a health researcher and deeply involved in similar work: aggregating data from large electronic health record databases. Many large medical centers care for ~1M patients per year, but even with all the COVID cases, one needs to aggregate ‘like with like’ data across multiple databases. There are standard informatics and statistical reasons to do this. So the overall methods are actually similar to those of several large national projects currently being stood up by NIH, CDC, and PCORI, among others.

I’ve been impressed with the dedication of the analysts and researchers. While we can work quickly, seeing that study put together in a matter of a few weeks, harmonizing data across hundreds of hospitals, did not meet a credibility test. Even more odd was that this was a company that many people in the field were not familiar with. Not to be snobbish, but this is not a huge field. There are standards for transparency and data harmonization, and the study seemed to skate over many of them.

At one level, the system worked in that researchers quickly identified problems. But could it have been stopped at peer review? If the reviewers weren’t familiar with ‘federated data methods’, that may have accounted for the lack of rigor in review. Your point about the pressure to ‘get the information out there’ is a good one.

I am sure there will be a lot of discussion about this, as well as a push to make the methodological standards for this type of research much more explicit.
