In 2026, TP reviewers continue to make outstanding contributions to the peer review process. They demonstrate professional effort and enthusiasm in their reviews and provide comments that genuinely help authors enhance their work.
Here, we would like to highlight some of our outstanding reviewers and share brief interviews on their thoughts and insights as reviewers. Allow us to express our heartfelt gratitude for their tremendous effort and valuable contributions to the scientific process.
Piotr Jung, Children's Hospital of Philadelphia, USA
Aguan D. Wei, Seattle Children’s Research Institute, USA
Celeste Riepe, Stanford University, USA
Tebyan Afnan Rabbani, Stanford University, USA
Derek S Tsang, Princess Margaret Cancer Centre, Canada
Jeff M. Sands, Emory University, USA
Antti J Kukka, Uppsala University, Sweden
Vonita Chawla, University of Arkansas for Medical Sciences (UAMS), USA
Piotr Jung

Dr. Piotr Jung is a cancer biologist currently working at the Children's Hospital of Philadelphia (CHOP). He received his PhD as a Marie Curie Fellow at the University of Cambridge (Babraham Institute), where he studied PI3K signaling and protein–protein interactions in cancer. His earlier training includes work at Dana-Farber Cancer Institute in Dr. Peter Sicinski's laboratory, focusing on cyclins and CDK2 in cell cycle regulation. His research centers on understanding how extracellular signals and cell cycle machinery drive tumor progression, with a particular focus on neuroblastoma, an aggressive pediatric cancer. He has developed a three-dimensional tumorsphere model to investigate how matrix metalloproteinases (MMPs), growth factors such as EGF and FGF, and cyclin-CDK complexes regulate proliferation and metastasis. Alongside his research, Dr. Jung is deeply committed to mentoring undergraduate and post-baccalaureate trainees and integrating rigorous, hypothesis-driven science with inclusive education. Learn more about him here.
TP: Why do we need peer review?
Dr. Jung: We need peer review because scientists fall in love with their own data. After spending months—or more often years—on a project, troubleshooting failed experiments, defending it at lab meetings, and rewriting the figures a dozen times, it’s very easy to lose objectivity. By the time we submit the manuscript, we’re usually convinced it’s flawless. It’s our scientific “baby,” and in our eyes, it’s perfect. That’s exactly why peer review is so important. Fresh, independent experts can see what we no longer can—missing controls, overinterpreted conclusions, unclear figures, or logical gaps that slipped past us. They ask the uncomfortable but necessary questions. While receiving reviews can feel like opening exam results you’re not sure you want to see, it almost always makes the work stronger. At its best, peer review isn’t about criticism—it’s about quality control. It protects the integrity of the field and ensures that what we publish is rigorous, reproducible, and actually advances knowledge. In the end, a slightly bruised ego is a small price to pay for better science.
TP: What do you regard as a constructive or destructive review?
Dr. Jung: Receiving reviews can feel soul-crushing—I know that firsthand. Opening that email with reviewer comments can raise your heart rate instantly. But over time, I’ve come to see that the tone of a review matters less than its substance. A constructive review is one that engages seriously with the science. It may be critical—even blunt—but it provides specific feedback, points out weaknesses in logic or methodology, suggests additional controls, or offers alternative interpretations. Even if it stings at first, it ultimately strengthens the manuscript and improves the quality of the work. Those are the reviews that make you pause, rethink, and sometimes run one more experiment—and later, you will be grateful you did. A destructive review, in contrast, is vague, dismissive, or overly personal. Comments that lack explanation, offer no actionable suggestions, or focus on undermining rather than improving the work are not helpful. Ironically, overly positive reviews that claim everything is flawless can also be concerning—they may suggest the reviewer didn’t engage deeply with the manuscript. In my experience, the most valuable reviews are not the nicest ones, but the ones that take the science seriously enough to challenge it.
TP: Is there any interesting story during review that you would like to share with us?
Dr. Jung: One of the most memorable parts of my experience with peer review has been living on both sides of the process. As an author, I’ve gone through several rejection cycles with very harsh reviews. At the time, it felt like an emotional rollercoaster—initial excitement, followed by disappointment, sometimes even frustration directed at anonymous reviewers who, in that moment, seemed unnecessarily critical. However, becoming a reviewer myself completely changed my perspective. I realized how much time and intellectual effort it takes to carefully read a manuscript, evaluate the data, check the logic, and formulate constructive feedback. A thoughtful review is not written casually—it requires real engagement with the science. Because of those experiences, I now try to be the reviewer I would have wanted during my toughest revisions. I aim to be precise and actionable. Instead of saying “improve the introduction,” I suggest specific concepts to clarify, key references to include, or even structural improvements like adding a graphical abstract to explain a complex mechanism. I also make a point to highlight strengths, not only weaknesses. Going through rejection and revision has made me a more empathetic author and a more responsible reviewer.
(by Ziv Zhang, Brad Li)
Aguan Daniel Wei

Aguan Wei, PhD, is a senior scientist at Seattle Children’s Research Institute, with prior faculty affiliations with Schools of Medicine at the University of Washington and Washington University in St. Louis. His primary scientific expertise lies in the molecular function of ion channels. With his colleagues in Seattle, he has expanded these interests to include central respiratory control, epilepsy and human genetics related to membrane excitability. Dr. Wei was born in Taiwan and grew up in the Midwest and Southeast of the USA. He graduated from the University of California-Berkeley (A.B.) and the University of Oregon (PhD). He received additional training at the Marine Biological Laboratory at Woods Hole and was a postdoctoral trainee at Washington University in St. Louis.
Peer review, in Dr. Wei’s view, is a critical component of self-correction in scientific publishing that draws on the collective wisdom of experts in the field to sustain a high quality of published reports. Beyond serving as a check for quality, the best reviews also provide constructive suggestions and perspectives for manuscript authors to consider.
Dr. Wei believes that inherent biases of reviewers, once recognized, should be explicitly stated in the review and to the journal editors. If these biases are judged severe enough to preclude fair consideration, the reviewer should recuse themselves from the review. A second check on biases unfairly affecting an evaluation lies with the journal editor, who can request additional reviews upon appeal. Ultimately, he notes that the final check on all science is a collective effort and the ability of other laboratories to independently replicate the same observations.
Dr. Wei views peer review as a professional duty and responsibility, and a privilege contributing to the long tradition of scientific exploration. He finds that reviews, rather than being a burden, often offer an opportunity to learn about specific topics in greater depth than he otherwise might. He also emphasizes that everyone needs downtime for healthy reflection and judicious balance, and he personally enjoys walking through the beautiful city parks and neighborhoods in Seattle.
(by Ziv Zhang, Brad Li)
Celeste Riepe

Celeste Riepe, PhD, is a senior postdoctoral fellow in the Kopito laboratory in the Department of Biology at Stanford University. She specializes in using state-of-the-art genetics and genome editing methods to better understand why some people with cystic fibrosis fail to respond to modulator therapies. A native of San Marcos, TX, Dr. Riepe received a BA in Biochemistry and Cell Biology from Rice University in Houston, TX, and a PhD in Molecular and Cell Biology from the University of California, Berkeley, where she worked in the laboratories of Nicholas Ingolia and Jacob Corn to characterize changes in protein synthesis and ribosome composition after Cas9-mediated genome editing. Dr. Riepe has been awarded the Cystic Fibrosis Foundation’s Path to a Cure Fellowship and Cystic Fibrosis Research Institute’s Elizabeth Nash Memorial Fellowship for her work on cystic fibrosis. Connect with her on LinkedIn.
TP: Why do we need peer review?
Dr. Riepe: Science is an inherently human process: we make mistakes, we are blinded by our biases, and we fall prey to our anxieties about funding and career advancement. Although an imperfect system, peer review helps guard scientists and the public from misguided or dishonest science that wastes precious time and funding. For peer review to remain an effective safeguard, it must remain blind—this enables the authors and reviewers to focus more on the science and less on the politics and prevents quid-pro-quo science in which authors are favorably reviewed in exchange for favors down the line.
TP: What do you regard as a constructive/destructive review?
Dr. Riepe: I strongly believe in providing actionable criticism—I see my role as a reviewer as providing authors with strategies for improving their science. When I receive a manuscript, I jot down my problems with it, then sit back and try to come up with feasible solutions. The tricky part is translating those solutions into review questions and commentary that nudge the authors in the right direction.
The most destructive types of reviews are 1) when reviewers insult or antagonize authors on personal grounds, 2) when reviewers reject the claims of a manuscript without providing any clear rationale, and 3) when reviewers ask for tangential experiments that have nothing to do with the science at hand. I find that these reviews are deeply inconsiderate of the time and energy of the authors.
(by Ziv Zhang, Brad Li)
Tebyan Afnan Rabbani

Tebyan Rabbani is a pediatric transplant hepatology fellow at Stanford University, where his work has focused on improving the early diagnosis and outcomes of infants with cholestatic liver disease. His research centers on biliary atresia screening, implementation science, and ultrasound-based biomarkers to accelerate diagnosis and reduce disparities in care. He has led multi-site initiatives implementing direct bilirubin newborn screening and collaborative ultrasound protocols across several institutions, and he is involved in international efforts to standardize the evaluation of infants with cholestasis. Dr. Rabbani’s broader interests include quality improvement in pediatric liver disease, transition of care for children with chronic liver conditions, and mentoring trainees in clinical research and systems-level innovation. He is committed to bridging the gap between discovery and bedside practice to ensure that advances in pediatric hepatology reach patients sooner and more equitably.
TP: Why do we need peer review?
Dr. Rabbani: Peer review is one of the few structured ways our field collectively protects the integrity of scientific knowledge. It serves as both a quality filter and a refinement process—ensuring that new findings are methodologically sound, clinically meaningful, and interpreted responsibly. Importantly, peer review is not meant to be a gatekeeping exercise but a collaborative one. The best reviews improve a manuscript’s clarity, strengthen its analysis, and help authors anticipate how their findings will be used in real-world clinical settings. In medicine, where research directly informs patient care, thoughtful peer review is essential to maintaining trust in the literature and to preventing premature or misleading conclusions from shaping practice.
TP: What do reviewers have to bear in mind while reviewing papers?
Dr. Rabbani: Reviewers should approach manuscripts with intellectual rigor and humility in equal measure. A review should ask: Is the question important? Are the methods appropriate? Are the conclusions supported by the data? But beyond critique, reviewers should aim to help authors produce their strongest possible work. Constructive feedback, specificity, and respect for the effort behind a study go a long way. It is also important to recognize context—differences in resources, patient populations, and study design constraints across institutions. Ultimately, reviewers should remember that their role is to advance the science, not simply to judge it.
TP: Data sharing has become prevalent in scientific writing in recent years. Do you think it is crucial for authors to share their research data?
Dr. Rabbani: When feasible and ethically appropriate, data sharing is increasingly important. It allows for validation of findings, secondary analyses, and broader collaboration—particularly in fields like pediatric hepatology where individual centers may see relatively small numbers of patients. Shared data can accelerate discovery, reduce duplication of effort, and improve transparency. At the same time, data sharing must be done responsibly, with attention to patient privacy, data governance, and appropriate interpretation. The goal is not simply openness, but meaningful collaboration that advances patient care and scientific understanding. When done thoughtfully, data sharing strengthens both the credibility and the impact of research.
(by Ziv Zhang, Brad Li)
Derek S Tsang

Dr. Tsang is an Associate Professor in the Department of Radiation Oncology at the University of Toronto. He is a radiation oncologist at the Princess Margaret Cancer Centre and Hospital for Sick Children in Toronto, Canada. He completed his medical training at Queen’s University, followed by residency at the University of Toronto. He obtained fellowship training in paediatric radiation oncology at St. Jude Children’s Research Hospital in Memphis, Tennessee, and holds a Master’s degree in clinical epidemiology from the Harvard T.H. Chan School of Public Health. Dr. Tsang’s research interests include evaluating re-irradiation for recurrent tumours and reducing the late effects of radiotherapy. He participates in international cooperative group studies with the Children’s Oncology Group and NRG Oncology, and also serves as an Associate Editor for the Red Journal and sits on the Editorial Board for Neuro-Oncology. Learn more about him here.
Dr. Tsang says that peer review is a system that works when contributors to scholarly knowledge (authors) also participate in ensuring the integrity of new knowledge, as peer reviewers. In the current academic world, there are heavy demands on everyone’s time, and one can be tempted to just hit “decline” on a peer review invitation. However, there is an unwritten social contract within academia that those whose knowledge and skills permit them to write a high-quality review should indeed do so. In his view, if every academic abides by this principle, peer review will continue to succeed and thrive.
Dr. Tsang stresses that it is very important for authors to disclose their conflicts of interest (COIs). He notes that COIs can be real or perceived; the key is transparency. Authors should err towards disclosure when there is uncertainty. It can then be left to editors, reviewers, readers, and the scientific community as a whole to decide how to interpret the reported results in the context of disclosed COIs.
“There is a need for academic institutions to recognize the time dedicated by peer reviewers. Without peer review, the publishing ecosystem would cease to function. I recognize this important point; just as others have donated their time to review manuscripts that I write and submit, I have a duty to serve as a peer reviewer for manuscripts that I am qualified to review.
It is important for hospitals and universities to acknowledge and reward peer reviewer duties; perhaps participation as a peer reviewer should be formally recognized within academic promotion criteria,” says Dr. Tsang.
(by Naomi Hu, Brad Li)
Jeff M. Sands

Dr. Jeff M. Sands is the Kokko Professor Emeritus of Medicine at Emory University and the Chief Medical Officer of NephroDI Therapeutics. Dr. Sands’ research focuses on the molecular physiology of urea transporters, aquaporins, and the urine concentrating mechanism, and the translation of these basic research findings into novel therapies for nephrogenic diabetes insipidus. NephroDI Therapeutics is developing a small-molecule therapeutic for congenital nephrogenic diabetes insipidus, a pediatric orphan disease. Dr. Sands has authored over 180 peer-reviewed manuscripts and 105 invited reviews or book chapters, and has co-edited a book. He has given 40 invited talks at national or international scientific meetings and over 110 invited lectures at other U.S. or international universities. He has received several honors, including the Homer W. Smith Award from the American Society of Nephrology in 2022 and the Robert W. Berliner Award from the American Physiological Society Renal Section in 2025.
Dr. Sands stresses that peer review is essential for ensuring the quality of research publications. It provides assurance that experts in the field have reviewed what is in the publication, concur with the findings, and agree that the data support the conclusions. It also provides a service to authors, as peer review offers feedback on the work and often leads to better publications.
From a reviewer’s perspective, Dr. Sands emphasizes that it is important for authors to follow reporting guidelines. These guidelines provide standards for what should be included in a manuscript. This is crucial for the readers as they consider implementing the manuscript’s findings into their own research or clinical care.
“Serving as a peer reviewer is both an important service and an educational opportunity. One often learns by reviewing manuscripts about how to improve the quality and clarity of one’s own writing,” says Dr. Sands.
(by Naomi Hu, Brad Li)
Antti J Kukka

Dr. Antti J Kukka is a general paediatrician working at Gävle Regional Hospital, Sweden. His research affiliation is with Uppsala University, Sweden, where he recently completed his PhD with the title “Surviving Birth and Thriving: Identifying infants at risk of death and disability in low- and middle-income countries”. He has a broad interest in Global Child Health and is currently on assignment with Doctors Without Borders (MSF) in South Sudan. Visit Dr. Kukka’s homepage and ResearchGate for more information.
TP: What do you consider as an objective review?
Dr. Kukka: An objective review is an attempt to put aside your own personal preferences when assessing the merits and shortcomings of a manuscript. We all view the world through our own subjective lens, so I do not believe that it is possible for a reviewer to be completely objective. When reviewing a paper, I try to be clear with the authors about whether my queries are based on my own preferences or prejudices, or on factual errors.
TP: What do you regard as a healthy peer-review system?
Dr. Kukka: A genuinely healthy peer-review system would start with assessment of the study plan before even sending in the ethical application and conclude with post-publication review of the already conducted research. This is, of course, unrealistic within our current level of resources, although I am excited about the potential of platforms like PubPeer in widening the definitions of peer review. When it comes to the traditional style of peer review with editors sending manuscripts to experts within the field, a healthy system would provide a timely assessment of the research from both subject matter and methodological experts. I believe that having a possibility for several rounds of review is beneficial to the final product, albeit delaying the publication.
TP: Is there any interesting story during review that you would like to share with us?
Dr. Kukka: I was recently asked by a journal, whose name I will omit, to review a manuscript on neonatal encephalopathy based on the Global Burden of Disease data. The manuscript itself was well written, but some of the results seemed to make no sense when compared to previous iterations of the same study. What put me off was the referencing. Many of the background citations referred to studies with only a tangential connection to the topic. The highlight was the “Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER statement” being supported by a reference to a study about ancient Japanese hunter-gatherers!
I brought my suspicion of AI misuse to the attention of the editor, who, to my dismay, responded that the paper had gone through an initial AI check and would need to be rejected on the basis of methodological weaknesses rather than suspicion of misconduct alone. I completed my review recommending rejection, and was relieved to find out that the paper was indeed rejected. Soon, AI-written papers will be reviewed by AI software without any human input—surely that can’t be healthy for science or peer review!
(by Naomi Hu, Brad Li)
Vonita Chawla

Dr. Vonita Chawla, MBBS, is a board-certified neonatologist and Assistant Professor of Pediatrics in the Division of Neonatal-Perinatal Medicine at the University of Arkansas for Medical Sciences (UAMS), practicing at Arkansas Children’s Hospital. Her academic work focuses on quality improvement, clinical research, and systems-level innovation to advance neonatal outcomes. As NICU Quality Lead, she has led impactful initiatives including reducing time to hyaluronidase for IV extravasation injuries, and advancing neonatal vascular access and perioperative transfusion practices. Her research covers congenital heart disease, hypoxic-ischemic encephalopathy, and necrotizing enterocolitis, with contributions to multicenter studies on practice variation, neuroprognostication, and disease mechanisms. She is an active investigator in the Children’s Hospitals Neonatal Consortium and site principal investigator for major clinical trials, with a national profile in data-driven, scalable neonatal care improvement.
Dr. Chawla thinks that while peer review is foundational to scientific integrity, it faces major limitations including time delays and inconsistent reviewer perspectives. Extended review timelines slow the spread of clinically critical findings, especially in fast-moving fields like neonatology and quality improvement where timely translation matters greatly. There remains a need for more efficient, standardized, and adaptable approaches to evaluating scientific contributions while preserving the fundamental strengths of peer review.
Dr. Chawla believes that an effective reviewer combines objectivity, insight, and intellectual openness. Key is an unbiased assessment focused on scientific merit. Strong reviewers also take a visionary approach, recognizing a study’s broader potential to shape clinical practice, policy, and future research beyond its immediate results.
According to Dr. Chawla, an Institutional Review Board (IRB) acts as an essential ethical gatekeeper, ensuring that human subjects research follows ethical and regulatory standards. Without IRB oversight, there would be a rise in low-quality and unethical research, severely eroding scientific integrity and putting research participants at risk.
(by Lareina Lim, Brad Li)

