The article proposes a DESIGN of a DEVICE for the ISSUE. The topic is valid and relevant; however, the methodology is flawed: the search strategy used, the inclusion and exclusion criteria, the data collected, and how the quality of the studies cited in the text was evaluated are all unclear. In addition, the results are poorly described and scarcely discussed. Early detection of caries lesions is cited in the keywords, but the text addresses secondary caries lesions, including in the conclusion.
In some places, results are erroneously placed in the methodology section. The text initially states that 14 studies were included and later says 15, among other errors.
Many clarifications and improvements must be made before the article can be considered for publication. In the absence of code, reproducibility falls back on replicating the methods from their textual description.
## AI models
Merely textual descriptions of deep-learning models can hide their high level of complexity.
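As a hypothetical illustration of this point (the function and the layer widths below are invented for this sketch, not taken from any reviewed manuscript), two networks that a paper might both describe in prose as "a three-layer fully connected network" can differ enormously in size once the unstated hidden widths are accounted for:

```python
def mlp_param_count(layer_sizes):
    """Trainable parameters (weights + biases) of a fully connected
    network whose layer widths are given in order."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Both configurations fit the textual description "a three-layer network",
# yet the undisclosed hidden widths change the size by two orders of magnitude:
small = mlp_param_count([128, 64, 32, 10])      # 10,666 parameters
large = mlp_param_count([128, 1024, 1024, 10])  # 1,191,946 parameters
```

This is why reviewers often ask for code or an architecture table rather than a prose description alone.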
## Rejection http://grigoriefflab.janelia.org/rejections
## Key questions
- Is the question posed by the authors well defined?
- Are the methods appropriate and well described?
- Are the data sound?
- Does the manuscript adhere to the relevant standards for reporting and data deposition?
- Are the discussion and conclusions well balanced and adequately supported by the data?
- Are limitations of the work clearly stated?
- Do the title and abstract accurately convey what has been found?
- Is the writing acceptable?
## Answer to reviewers
Thank you for the additional review of our manuscript. We appreciate the time and effort that it takes to review these types of studies. We have revised our manuscript per the Editorial request. Please find below a point-by-point description of those changes.
The _ _ _ _ _ _ _ has been significantly expanded within the _ _ _ _ _ _ _ section
Respectfully we would prefer not to _ _ _ _ _ _ _ in this fashion. Given _ _ _ _ _ _ _ we are concerned that this _ _ _ _ _ _ _ would be over-stating the results. The main goal of this manuscript is to _ _ _ _ _ _ _. Thus we would like not to present the data in this fashion.
Thank you, we completely agree that this was _ _ _ _ _ _ _ . This conclusion has been changed to convey a better message.
# Reviews examples
## ACCEPT
The authors have dealt with my remarks adequately and to my full satisfaction. Also the remarks of other referees were sufficiently taken into account. I have no further objections to accepting the manuscript.
### REJECT GENERAL
This manuscript lacks complete and transparent reporting of biomedical and biological research. I strongly recommend that the authors refer to the minimum reporting guidelines for health research hosted by the EQUATOR Network (http://www.equator-network.org/). Specifically recommended for this research are:
- Randomized controlled trials (CONSORT) and protocols (SPIRIT)
- Systematic reviews and meta-analyses* (PRISMA) and protocols (PRISMA-P)
- Observational studies (STROBE)
- Case reports (CARE)
- Qualitative research (COREQ)
- Diagnostic/prognostic studies (STARD and TRIPOD)
- Economic evaluations (CHEERS)
- Pre-clinical animal studies (ARRIVE)
Justification of the appropriateness of the statistical test used is lacking. A full description of the statistical test and appropriate reporting of its results are also lacking. Please refer to the SAMPL guidelines for more information: http://www.equator-network.org/reporting-guidelines/sampl/
## REJECT
Reviewer: 1
Comments to the Author Overall, this is an opaque presentation of the means for, and results of, ranking the scientific production in “Dentistry” of entire countries. It is opaque both because the methods are only partially described and because the utility of the results is unclear. No case is built for the need for this information, and its attraction for readers of a journal intended primarily for practicing general dentists is difficult to understand.
For example: the methods do not describe the SCImago Journal Rank either superficially or in detail. How does this ranking process identify the “most productive” journals?
The methods do not describe the strategy undertaken that allowed the researchers to retrieve substantially more documents than the SJR ranking, nor do they discuss thoroughly the effect of the addition of these additional documents.
The g index and hg index results are barely mentioned let alone discussed, although they appear in Table 1.
The basis for the conclusion regarding the h index is not clear. The discussion does not mount an argument for its “clear demonstration of the quality of a country’s scientific production in dentistry.”
In addition, several aspects of the presentation need greater explanation. What is a “document” in this context? Is the category broader than scientific articles? How does the h-index address the issue of self-citation? Why do the two time periods used for comparison, 1976-2015 and 1994-2014, overlap? Does that not compromise the analyses regarding change in rankings? The discussion presents several statements that need documentation.
Comments to the Author JADA #413-16: Scientific production in dentistry by country: an approach to quality using the h-index.
1. Rankings are one of those things that everyone loves to hate, unless, of course, one is ranked high. I understand why schools and colleges are interested in ranks and boast when the institution does well (attracting faculty and students with their tuition dollars). I understand why journals are interested in ranks (attracting high quality authors, more readers, and more advertising revenue). However, I do not see the purpose or value in comparing countries regarding their production and quality of dental research publications. Regrettably, the authors do not provide a rationale for conducting this analysis. I finished reading the manuscript and was left with the question, “So what?”
2. Hirsch (reference 1) proposed the h-index as a measure of an INDIVIDUAL’S research productivity and quality as one of several metrics to be considered when competing for finite resources (for example, tenure). Applying the h-index to country-level data is a rote mathematical exercise that does not take into account the conditions and nuances enumerated in Hirsch’s original paper. Yes, others have done this, but that does not mean that it is valid and that additional papers should appear.
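For reference, the h-index (and the g- and hg-variants that appear elsewhere in these reviews) is a purely mechanical computation over a list of per-paper citation counts, which is exactly why applying it to country-level data is a rote exercise. A minimal sketch, using an invented citation list purely for illustration (one common formulation of the g-index, used here, caps g at the number of papers):

```python
import math

def h_index(citations):
    """Largest h such that h papers each have at least h citations (Hirsch)."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations (Egghe)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def hg_index(citations):
    """Geometric mean of the h- and g-indices."""
    return math.sqrt(h_index(citations) * g_index(citations))

# Invented example: five papers with these citation counts.
papers = [10, 8, 5, 4, 3]
h_index(papers)   # 4  (four papers have >= 4 citations each)
g_index(papers)   # 5  (top 5 papers total 30 >= 25 citations)
```

Nothing in the computation knows whether `citations` belongs to one researcher or a whole country, which is the reviewer's point: the validity concerns live outside the arithmetic.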
3. Although generally well-written, the organization of the manuscript created some confusion. The abstract, introduction, and early paragraphs of the methods section mention two time periods (1972–2015 and 1996–2014) without explanation; this is especially confusing as the second time period is subsumed by the first. Not until the penultimate paragraph of the methods section is the reader told the rationale for the selection of these time periods (and I will comment further in my specific comments below).
4. In some instances the authors refer to the “Web of Knowledge” and in others the “Web of Science.” The Web of Science is the current name of the database.
5. All of the comparisons of the quantitative output of countries (text and tables) are not very meaningful. Is it any surprise that the USA produces more papers than the UK, given the differences in population size, which probably reflect a difference in the number of dental researchers producing papers? It seems to me that to be meaningful the quantitative output needs to be expressed as a rate. There needs to be some normalizing denominator: per capita, per 100 FTE dental researchers, per $10,000 of national funding for dental research, or something else.
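The reviewer's point about a normalizing denominator can be made concrete. The figures below are invented for illustration only (they are not the manuscript's data): a large country can lead in raw paper counts while trailing once output is expressed as a rate.

```python
def per_capita_rate(papers, population, per=1_000_000):
    """Publications per `per` inhabitants (default: per million)."""
    return papers * per / population

# Invented figures: the bigger country wins on raw counts,
# the smaller one wins on the normalized rate.
big_raw, big_pop = 5000, 330_000_000
small_raw, small_pop = 600, 10_000_000
per_capita_rate(big_raw, big_pop)      # ≈ 15.2 papers per million
per_capita_rate(small_raw, small_pop)  # 60.0 papers per million
```

Any of the denominators the reviewer lists (population, FTE researchers, funding dollars) slots into the same calculation; the choice of denominator is the substantive decision.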
1. Introduction, paragraph 8: The stated purpose of this study is “to analyze the changes in scientific production over recent history and also compare rates of production in different countries.” This manuscript ignores the first stated purpose. Lumping all papers published over a 44-year period (1972–2015) does not allow for an analysis of the changes over recent history. A bona fide temporal analysis would establish the secular trends (looking at the most recent 18 years in comparison to the entire 44-year period does not suffice). Indeed, if one believes that the h-index is a measure of the quality of the dental science produced, then seeing if the quality of sciences has improved with time would be far more interesting than comparing the quality of science across countries.
2. Results, paragraph 2, sentence 4: Two questions regarding the “strategic policies” applied by Belgium, Sweden and Switzerland. (A) What data reported in this manuscript support and lead to this claim? (B) What are the “strategic policies”?
3. Results, paragraph 3 and Table 3: What is the purpose of comparing the study results with the results based on the SJR? Is it to claim that the authors’ data have greater fidelity than the SJR data? Perhaps the additional 26,833 dentistry documents were excluded by SJR for a good reason. Perhaps including them actually compromises the authors’ data. Where is the comparison going?
4. Discussion, paragraph 3: Again, the authors refer to the “strategies” of the Nordic and Northern European countries to obtain high h-indices. I am very skeptical that the governments of these countries have given any thought whatsoever to elevating the h-index of the dental science literature published by resident authors. Except for acknowledging the strong scientific foundations of the Swedish and Swiss dental schools and that they are engaged in research that is particularly timely, the authors offer no examples of “strategic policies” to increase the h-index.
5. Discussion, paragraph 6: If the SJR indexes more dental journals than the Web of Knowledge [sic], then how did the authors obtain 26,833 additional dentistry documents from the Web of Knowledge compared to the SJR (as reported in the results and Table 3)? The explanation offered in the next paragraph of the discussion is not satisfactory. Moreover, if the h-index for an entire country over an 18-year period (1996–2014) can change dramatically based on a 9-month citation period (March 2015 versus December 2015), that raises questions about the stability of the h-index itself.
6. Figure 1: The histograms only show the relative comparison of the h-, g- and hg-indices within a country. There is no quantitative label on the y-axis of each histogram. Even if the y-axes are the same, it is very difficult to compare countries just by looking at the figure. Are the distribution and values of the indices the same for Brazil and Norway?
7. Table 2, right-hand column: What do these percentages mean? What was the denominator for calculating the percentages?
Comments to the Author Scientific production in Dentistry by country: an approach to quality using the h-index. Manuscript ID: 413-16
This manuscript reports the amount and three impact indices for dental research papers published between 1972 and 2014. My recommendation is to reject this manuscript for publication, since there is a discrepancy between the aim, methods, and results. The declared aim is to analyze the quality of dental research papers, the results point to the amount of papers published by country, and the conclusion is that the h-index can be used to assess the relation between the amount of research and its quality. Also, the meaning of this research for the clinician is not clearly stated.
Page 5, lines 24-39: This is part of the methods. It is not clear how the quality of a paper was measured. As stated in the manuscript, the h-index is a measurement that aims to describe the scientific productivity and impact of a researcher. Hence, the correlation or agreement between the h-, g-, or hg-indexes and quality is not clear in the manuscript. The methods do not describe in enough detail the process used to measure quality.
Page 7, lines 31-40 are part of the discussion. Most of the results are part of the discussion (page 7, line 53 “However…”; page 8, lines 10-25 and 28-41). Some of the conclusions are not supported by the data, such as “the h-index can be used as an indicator of quality,” since there is no clear assessment of quality itself. From the results, the trend over time of the indexes (quality indexes?) is not clear. It is not clear whether this research aimed to evaluate the performance of the databases used (SCImago) and how this output was validated with a manual check or another database.
Reviewer(s)' Comments to Author:
Comments to the Author Relevance: Implant planning radiography falls within the scope of JADA
Originality: Comparisons between panoramic and CBCT imaging are hardly original. This paper fails to review current recommendations for such imaging and does not measure the transaxial anatomy as this would be impossible with panoramic imaging.
Abstract: The abstract follows the flawed study design and wrongly concludes that panoramic dental imaging is sufficient for dental implant planning. It fails to factor the transaxial view and only considers vertical and horizontal measurements.
Introduction: The introduction fails entirely to review existing guidelines on dental implant imaging such as those available at www.aaomr.org and at the SEDENTEX-CT website. The introduction does not adequately review the literature and does not show a lack of current knowledge or a controversy in the literature that this study can remove. The rationale for this study is not achieved as the readily available literature has not been adequately surveyed.
Methods: This study fails from not mentioning that the results only apply to one specific panoramic system and one CBCT system among many. It also fails to review the importance of the transaxial view in dental implant planning that is possible with CBCT but totally impossible for panoramic radiographs.
Results: The results compare only measurements possible with the Planmeca panoramic radiology system to similar ones on the CBCT images. They do not show measurements that can only be made with CBCT and are needed for accurate implant planning. For the individual patient, mean distortion measures are less important than accurate 3D measurement in that specific individual. The scattergrams do show a number of outliers for which using an average magnification correction would not be satisfactory.
Discussion: The discussion generalizes from a specific pairing of panoramic and CBCT systems to all panoramic and CBCT systems, and this is not acceptable. Furthermore, there is no mention of the measurements that are impossible with panoramic radiography but important for accurately planning implant placement. Panoramic imaging provides distorted 2D imaging, while the real patient anatomy is 3D in nature. Dental implants are expensive, as is the treatment needed to place them, and the imaging costs are a minimal part of the overall costs. It is irresponsible to cut corners on accurate pre-implant imaging and potentially compromise such expensive procedures. Outliers to the magnification correction are not discussed in terms of their potential for causing serious untoward effects in the individuals they represent.
Conclusions: The conclusion is flawed because it is not possible to determine, based upon clinical and panoramic inspection, what cannot be seen without CBCT, and this certainly does not apply only to patients with periodontal disease as suggested by the authors.
References: Many of the references are very dated, especially those for panoramic imaging most of which precede digital panoramic radiography by several decades. The CBCT references are also dated and fail entirely to include guidelines from various organizations that are readily available.
Figures: The OIPG image illustrated is of low density and poor contrast as a result.
Comments to the Author The authors present a comparison between Pan and CBCT to determine mostly the magnification factor of Pans in determining bone height. With this, it is assumed that CBCT is taken as the gold standard. Unfortunately, this is an error in itself because, although CBCT is good at giving us that information, it still presents error, and several articles have shown and reported this. Also, there are several articles in the literature presenting what to expect in terms of error with Pans. Taking both CBCTs and Pans of patients is also a critical issue, since it exposes patients to extra radiation. The best way to have demonstrated the magnification would have been to take Pans of periodontally compromised patients and then raise gingival flaps to verify the exact position of the bone. As the authors present this work, it does not contribute significantly to the literature.
Comments to the Author This research reports findings on the accuracy of measurements in panoramic images compared to CBCT as the gold standard.
Although this is an interesting topic and representative of a clinical problem, I suggest that this paper be rejected for publication on the grounds that it provides no new or relevant information.
The main issue with this report is the lack of an appropriate search of the literature. The magnification rate of orthopantomography (OPG) has been extensively reported previously (see references below).
The findings reported of this research can be predicted from the literature.
REFERENCES
1. Luangchana P, Pornprasertsuk-Damrongsri S, Kiattavorncharoen S, Jirajariyavej B. Accuracy of linear measurements using cone beam computed tomography and panoramic radiography in dental implant treatment planning. Int J Oral Maxillofac Implants. 2015 Dec;30(6):1287–94.
2. El Hage M, Bernard J-P, Combescure C, Vazquez L. Impact of digital panoramic radiograph magnification on vertical measurement accuracy. Int J Dent. 2015;2015:452413.
3. Flores-Mir C, Rosenblatt MR, Major PW, Carey JP, Heo G. Measurement accuracy and reliability of tooth length on conventional and CBCT reconstructed panoramic radiographs. Dental Press J Orthod. 2014 Oct;19(5):45–53.
4. Vazquez L, Nizamaldin Y, Combescure C, Nedir R, Bischof M, Dohan Ehrenfest DM, et al. Accuracy of vertical height measurements on direct digital panoramic radiographs using posterior mandibular implants and metal balls as reference objects. Dentomaxillofac Radiol. 2013;42(2):20110429.
5. Raoof M, Haghani J, Ebrahimi M. Evaluation of horizontal magnification on panoramic images. Indian J Dent Res. 2013 Jun;24(3):294–7.
6. Nikneshan S, Sharafi M, Emadi N. Evaluation of the accuracy of linear and angular measurements on panoramic radiographs taken at different positions. Imaging Sci Dent. 2013 Sep;43(3):191–6.
7. Devlin H, Yuan J. Object position and image magnification in dental panoramic radiography: a theoretical analysis. Dentomaxillofac Radiol. 2013;42(1):29951683.
8. Kumar MA, Mody B, Nair GKR, Surender LR, Gopal SS, Prasad RVKA. Dimensional accuracy and details of the panoramic cross-sectional tomographic images: an in vitro study. J Contemp Dent Pract. 2012 Feb;13(1):85–97.
9. Vazquez L, Nizam Al Din Y, Christoph Belser U, Combescure C, Bernard J-P. Reliability of the vertical magnification factor on panoramic radiographs: clinical implications for posterior mandibular implants. Clin Oral Implants Res. 2011 Dec;22(12):1420–5.
10. Momjian A, Courvoisier D, Kiliaridis S, Scolozzi P. Reliability of computational measurement of the condyles on digital panoramic radiographs. Dentomaxillofac Radiol. 2011 Oct;40(7):444–50.
11. Langlois C de O, Sampaio MCC, Silva AER, Costa NP da, Rockenbach MIB. Accuracy of linear measurements before and after digitizing periapical and panoramic radiography images. Braz Dent J. 2011;22(5):404–9.
12. Hoseini Zarch SH, Bagherpour A, Javadian Langaroodi A, Ahmadian Yazdi A, Safaei A. Evaluation of the accuracy of panoramic radiography in linear measurements of the jaws. Iran J Radiol. 2011 Sep;8(2):97–102.
13. Chuenchompoonut V, Ida M, Honda E, Kurabayashi T, Sasaki T. Accuracy of panoramic radiography in assessing the dimensions of radiolucent jaw lesions with distinct or indistinct borders. Dentomaxillofac Radiol. 2003 Mar;32(2):80–6.
14. Stramotas S, Geenty JP, Petocz P, Darendeliler MA. Accuracy of linear and angular measurements on panoramic radiographs taken at various positions in vitro. Eur J Orthod. 2002 Feb;24(1):43–52.
15. Wyatt DL, Farman AG, Orbell GM, Silveira AM, Scarfe WC. Accuracy of dimensional and angular measurements from panoramic and lateral oblique radiographs. Dentomaxillofac Radiol. 1995 Nov;24(4):225–31.
16. Hayakawa Y, Wakoh M, Fujimori H, Ohta Y, Kuroyanagi K. Morphometric analysis of image distortion with rotational panoramic radiography. Bull Tokyo Dent Coll. 1993 May;34(2):51–8.
Comments to the Author JADA #326-15: A survey of oral and maxillofacial radiologists’ practice habits and attitudes toward state-based licensing: a harbinger of things to come.
1. This submission addresses an important issue and presents teleradiology as a case study of the larger issues that are facing and will continue to face teledentistry in general.
1. Abstract, Conclusion: The conclusion simply repeats what is said in the results section of the abstract. There is essentially no difference.
2. Materials and methods, paragraph 1: I am surprised and distressed that the Harvard School of Dental Medicine IRB would approve this survey as “not human subjects research.” The survey asks respondents if they engage in behaviors that may be illegal and expose them to criminal prosecution (at least according to the authors). Despite the fact that the survey was conducted “anonymously,” it may still be possible to use a digital “paper trail” to link respondents to their answers (although it is unlikely that any enforcement agency would bother to do this). Hence, by participating in this survey, respondents have placed themselves at greater than “minimal risk.” Therefore, in my opinion, the Harvard School of Dental Medicine IRB erred in classifying this as “not human subjects research.” This raises the ethical issue of whether the data collected in this survey should ever be published or made public.
3. Materials and methods, paragraph 2: (A) I went to the ABOMR website and could not find a publicly accessible database with contact information for the Board Diplomates. All that I could find was a list of the names of the Diplomates by state. How did the investigators obtain the Diplomates’ e-mail addresses? (B) Assuming that the contact information is truly publicly available, the authors need to provide the URL and access date in the text or as a reference.
4. Results, paragraph 1, sentence 2; and Table 1, first entry: (A) If the contact information came from the ABOMR website, then how is it possible that 11 of the 77 respondents are not Diplomates? This needs to be explained in the text. (B) There is a typographical error in the sentence – the correct percentage for the number of Diplomates is 86%, not 89.6%.
5. Results, paragraph 1, sentence 3; and Table 1, second entry: The numbers do not add up properly. We were just told that there were 66 confirmed Diplomates. In this sentence we are told that of these 92.8% currently read films. The percentage comes from the second entry of the table (reported there as 93%), but that percentage is based on a denominator of 69 Diplomates (64 reading films and 5 not reading films). How can there be more Diplomates reading films (n=69) than there are Diplomates in the sample (n=66)?
6. Results, paragraph 1, sentences 5 and 6; and Table 1, third and sixth entries: Again, there are inconsistencies between the text and the table. In the first of these two sentences we are told that there are 36 respondents (the 56%) who write reports for patients and/or dentists who reside in states that they are not licensed in. In the second of these two sentences we are told that “these same individuals reported that there are no states for which they would not write reports.” However, this does not agree with the sixth entry of the table, which shows that 15 out of 34 respondents would not write reports under these conditions.
7. Results, paragraph 1, sentences 7 and 8; and Table 2: Again, there are inconsistencies between the text and the table. In the first of these two sentences we are told that there are 15 respondents (the 43.7%, based on 34 respondents per Table 2) who do not write reports for patients and/or dentists who reside in states that they are not licensed in. In the second of these two sentences and in Table 2 we are given the percent distribution of the specific reasons offered by these 15 respondents. However, simple calculations using the data in Table 2 indicate that there were 49 responses among 39 respondents (yes, respondents could choose more than one answer). But how can there be 39 respondents to the question in Table 2 when there were only 15 eligible respondents? It is important to note that responding to the questions in Table 2 was contingent upon responding “yes” to the last question shown in Table 1.
8. Results, paragraph 2, sentence 2; and Table 1, fifth entry: It is misleading to report that 80% of the respondents did not know whether their malpractice carrier excluded certain states because the percentage is based on only 15 respondents, not the 64 (or even 60) we were told previously constituted the sample for reporting the “remaining results.”
9. Results, paragraph 3, sentences 4 and 5: These sentences refer to Table 4, which includes the free-text responses to the “other” option of question 6. It seems logical to report these results immediately following the summary of the data from question 6 (i.e., at the end of the first paragraph of the results section).
10. Discussion, paragraph 2, sentence 2: Refers to the inflated “80%” identified in comment 8 above.
11. Discussion, paragraph 3, sentence 1: There are no data whatsoever to support the second half of this sentence (…and who would not consider abandoning that practice even though their actions are potentially criminal in nature). The authors did not address this in their survey and can have no knowledge about what the respondents would or would not consider doing in the future.
12. Discussion, paragraph 4, sentence 2: The differences described here may not reflect confusion or misunderstanding at all. Indeed, there are great regional and individual differences among the state regulations, so without knowing where the two respondents practice and from which states they receive referrals it is not possible to discern whether their comments are due to confusion or misunderstanding or whether they accurately reflect the respondents experience based on the region they live in and draw from.
13. Table 1: In addition to the discrepancies mentioned previously, why are there only 34 responses in the sixth entry? What happened to the other 30 respondents?
14. Table 2: This table is unnecessary as all of the data are presented in the last sentence of the first paragraph of the results section.
15. Table 4: The first and last entries are the same.
16. Survey Questionnaire: The questionnaire was not well designed. (A) There were no screening questions to determine the suitability of the respondent (other than the first question asking them to verify Diplomate status). Based on the comments in Tables 3 and 4, there were several respondents who live in Canada or outside of North America, several who limited their practice to intramural dental school patients, and several more who are insured or indemnified by their state. All of these folks should have been excluded. Indeed, two of the Canadians even suggested that their responses may not be pertinent to the study. (B) The survey intended to address the issue of images that are acquired out-of-state (relative to where the respondent resides) and sent electronically across state borders for an interpretation. While this is implied, it is never stated explicitly. This is problematic for the primary question in this study: “Do you read images and provide reports for patients who reside in a state in which you are not licensed, or from dentists who practice in a state in which you are not licensed?” Consider the following: I am an oral and maxillofacial practitioner with an imaging center in New York City. In addition to local patients referred by local dentists, I regularly see patients who reside in New Jersey or Connecticut who are referred by dentists in their home state. I only interpret images that I acquire in my imaging center and do not interpret images sent to me electronically. In this situation, I would still respond “yes” to the primary question, but I am not the respondent who the researchers are really interested in. While I have offered a hypothetical scenario, the authors have no way of knowing how many respondents this scenario might actually apply to.
There are several “border” cities in addition to New York City: Philadelphia, Washington DC, Portland OR, and St Louis MO, as well as others that are likely to draw from nearby neighboring states: Boston, Chicago, Minneapolis. I am sure that there are other cities that fit these categories that are not coming to mind at the moment. (C) The primary question is a compound or “double-barreled” question that requires both parts to be negative in order to generate a negative response. This is simply not good practice. Each question should address a single element so that differences can be identified. (D) The survey included two questions regarding malpractice premiums that appear virtually identical despite slightly different wording. Results from these questions were not reported.
Comments to the Author 326-15 A SURVEY OF ORAL AND MAXILLOFACIAL RADIOLOGISTS’ PRACTICE HABITS AND ATTITUDES TOWARD STATE-BASED LICENSING: A HARBINGER OF THINGS TO COME? GENERAL
This paper reports the results of a survey answered by 77 dentists, 70 of whom are also maxillofacial radiologists, about whether they report images for patients and dentists in states in which the radiologists are not licensed.
There are *several* issues with the *methodology* section that prevent a proper evaluation; hence, the manuscript must address the following issues before the results can be reviewed in detail.
1. I suggest that the title of the paper be changed to better reflect the work, e.g., “What are the views of the OMR about….”
2. It is not clear what problem is to be studied: the attitudes of dentists about reading an image from a patient from another state or even country? How many are aware of the legal complications of the limits of their legal responsibility? Clarify this in the paper.
3. Page 6, line 14: the information is not publicly available. See https://aaomr.site-ym.com/sear
4. Describe in detail how the survey was developed. Also describe whether the usability and technical functionality of the electronic questionnaire were tested. This is necessary to ensure that the data collected were valid. Also describe whether it was an adaptive questionnaire (certain items displayed only conditionally, based on responses to other items) or a static one. See Eysenbach, G., 2004. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J. Med. Internet Res. 6, e34. doi:10.2196/jmir.6.3.e34
5. Were the responses entered manually into a database, or was there an automatic method for capturing responses? Were the dentists contacted via email? Were any incentives offered?
6. State whether the respondents were able to review and change their answers.
7. Since a web questionnaire was used: how did the authors match a unique visitor to a single questionnaire? IP address? Cookies?
8. Did the authors collect information about how many dentists visited the survey page? Add this information and analyze this result, for example: “we sent 255 invitations… the webpage had XXX visitors and we obtained ZZZ completed questionnaires and YYY partially completed.”
9. Describe whether only completed questionnaires were analyzed and how incomplete questionnaires were handled.
Please consider following the recommendations from Burns et al. for the reporting of surveys: Burns, K.E.A., Duffett, M., Kho, M.E., Meade, M.O., Adhikari, N.K.J., Sinuff, T., Cook, D.J., ACCADEMY Group, 2008. A guide for the design and conduct of self-administered surveys of clinicians. CMAJ 179, 245–252. doi:10.1503/cmaj.080372
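The breakdown suggested in point 8 is simple arithmetic. A minimal sketch, loosely following the CHERRIES terminology; only the 255 invitations come from the example in the review, and every other count here is hypothetical:

```python
# Hypothetical survey-flow counts; only "invited" comes from the example above.
invited = 255        # email invitations sent (from the review's example)
visitors = 180       # unique visitors to the survey page (hypothetical)
completed = 120      # fully completed questionnaires (hypothetical)
partial = 25         # partially completed questionnaires (hypothetical)

# Rates a CHERRIES-style report would break out explicitly
view_rate = visitors / invited                      # invitees who opened the survey
participation_rate = (completed + partial) / visitors  # visitors who started answering
completion_rate = completed / (completed + partial)    # starters who finished

print(f"View rate:          {view_rate:.1%}")
print(f"Participation rate: {participation_rate:.1%}")
print(f"Completion rate:    {completion_rate:.1%}")
```

Reporting all three rates, rather than a single "response rate", lets the reader judge nonresponse bias at each stage of the survey.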
Comments to the Author
The questionnaire asks questions concerning the radiologists' understanding of whether their radiographic interpretation is covered by other states and by their malpractice carrier. From this, the authors extrapolate a need for national licensure, citing European and other national models.
(1) There is no research on what the actual laws are in the sampled states. The authors just cited what the survey reported. I would do your own research of state boards on the same questions, to see whether their responses match the interpretations of the oral/maxillofacial radiologists.
(2) I would research whether other state laws even consider radiographic interpretation the “practice of dentistry.”
(3) The same applies to malpractice: there are no data or research directly asking the same questions of major malpractice carriers.
(4) Since oral radiologists are such a small percentage of dentists, and their radiographic interpretation is diagnostic information, perhaps include a discussion of limited licensure or “registration” in those states. Extrapolating this into a call for national licensure seems extreme.
(5) I would suggest bringing in more of a comparison to our medical colleagues, who have been doing this on a much larger scale for many more years, to see how this is handled in the medical community.
(6) Wouldn't the same issue arise with oral pathologists? Most dentists send their pathology requests to out-of-state labs for interpretation. Isn't that very similar to radiologists interpreting radiographs?
Reviewer(s)' Comments to Author:
Comments to the Author
Relevance: The radiologic anatomy of the maxillary sinus is certainly an appropriate topic for JADA.
Originality: The topic of sinus septae is not particularly novel.
Title: The title is inaccurate, as only sample projections saved in bitmap format were used to represent CBCT, and then from solely one machine. The full 3D capabilities of OnDemand were not employed and were unavailable to the solitary “dental radiologist” (presumably a specialist in oral and maxillofacial radiology?).
Abstract: The abstract does represent what was done, but follows the manuscript in failing to use terminology precisely and in providing invalid conclusions.
Keywords: The keywords do not follow standard MeSH terminology.
Introduction: The review of the literature failed to provide evidence of a dearth of information or controversy regarding the use of diagnostic imaging for sinus septum interpretation, failed to indicate anatomical knowledge of the presence and positioning of maxillary antral septa, and failed to generate a specific hypothesis upon which to build the methods employed.
Methods: Patients become subjects once recruited into a scientific study. Bilateral sinuses cannot be considered independent of one another. 10 mm is a relatively narrow curved slab to use for panoramic reconstruction from CBCT for detection of sinus septa. Are “cross-sectional” images “transaxial slices”? How thick was the axial slice, and how was it determined which height level to use? Different levels would yield different results for septa detection, and any one axial slice is certainly not good ground truth. Further, to my knowledge, surgeons do view and use axial slice images from CT, CBCT, MRI, etc. Why was the full power of OnDemand not used to provide three-dimensional examination of the sinuses for septal numbers and positions? Why was but one observer used?
Results: The results are invalid as the gold standard was a single axial slice representative only of that level in the sinus.
Conclusions: These are overstated, as the study design did not permit use of the full power of CBCT and the OnDemand software. Further, the ground truth applied is simply invalid.
Comments to the Author
Review of “Comparison between panoramic radiography and cone beam computed tomography in the evaluation of maxillary sinus septa”
This manuscript presents the results of a diagnostic accuracy study (diagnostic evidence level 2 according to Fryback and Thornbury) aimed at determining the usefulness of cone-beam computed tomography panoramic reconstruction for detecting the presence, location and orientation of maxillary sinus septa.
Although this is an interesting topic and representative of a widely used intervention in dentistry, I suggest that this paper be rejected for publication on the grounds that it provides no new or relevant information and because considerable methodological weaknesses exist, preventing a proper evaluation of the results presented.
Some of the material is of interest to dentists specializing in Implantology or Radiology, but the needed revisions are so extensive that any resubmission should be considered a new paper.
I suggest also addressing the issues raised in the recent systematic review published by Vogiatzi et al., entitled “Incidence of anatomical variations and disease of the maxillary sinuses as identified by cone beam computed tomography: a systematic review.” Int J Oral Maxillofac Implants. 2014 Nov-Dec;29(6):1301-14. doi: 10.11607/jomi.3644.
Material and methods
Report whether data collection was planned before the index test and reference standard were performed (prospective study) or after (retrospective study).
Report the assumptions for the chosen sample size.
Report both the slice thicknesses and slice spacing for the CBCT examination.
Clarify whether the entire sinus was examined or only the floor related to the implant assessment area.
Describe the calibration process used to assess the images and the results of that calibration.
About the pilot study, clarify whether this 10% of the sample was reassessed or was not used.
Describe in detail the reference standard and its rationale.
Describe the definition of and rationale for the units, cut-offs and/or categories of the results of the index tests and the reference standard.
Describe whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test and describe any other clinical information available to the readers.
Describe how the data were tabulated: one record per reading, per patient, or per area?
Describe the unit of analysis.
Since the typescript proposes to analyze the data with a reliability analysis (kappa test), this research seems more likely to be a reliability study than a diagnostic accuracy study. In that case, my suggestion is to follow the Guidelines for Reporting Reliability and Agreement Studies (GRRAS). See J Clin Epidemiol. 2011 Jan;64(1):96-106. doi: 10.1016/j.jclinepi.2010.03.002
A study aimed at providing level 2 diagnostic evidence should report the sensitivity, specificity, predictive values and preferably a ROC curve. None of these results is reported in the present manuscript.
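To make the requested measures concrete, here is a minimal sketch computing them from a hypothetical 2x2 table (index test vs. reference standard for septum present/absent; all counts are invented for illustration), together with the kappa statistic the typescript proposes:

```python
# Hypothetical 2x2 table: rows = index test (panoramic reconstruction),
# columns = reference standard. All counts are made up for illustration.
tp, fp, fn, tn = 40, 10, 5, 45

sensitivity = tp / (tp + fn)   # true positives among all with a septum
specificity = tn / (tn + fp)   # true negatives among all without a septum
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

# Cohen's kappa: agreement corrected for chance, from the same table
n = tp + fp + fn + tn
po = (tp + tn) / n                                           # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # agreement expected by chance
kappa = (po - pe) / (1 - pe)
```

Note that kappa alone (a reliability measure) does not substitute for sensitivity and specificity against a valid reference standard; both can be computed from the same tabulated data, which is why the tabulation format requested above matters.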
## Reviewer comments and how to respond
Reviewer: The English is not good enough for publication.
Author: Some of the reviewer’s comments were so badly written, how can he be a good judge of English!
It’s true that many reviewers do not have English as their first language. Perhaps they found your English difficult to understand, or perhaps they were afraid that other reviewers would point it out and they would be embarrassed for not having said anything about the English language. Some non-English speakers feel it saves face to criticise the language themselves.
Author: You asked a colleague from the US to read your paper and he said it was fine!
Perhaps the English in your paper was OK, but just not good enough for publication. Perhaps your colleague was trying to be nice and did not want to say he thought the English was not good enough. Perhaps your colleague didn’t want to get stuck with the job of re-writing your paper. Perhaps he just didn’t understand your paper and didn’t want to admit it.
Author: You think the reviewer is judging your English more harshly because it’s from China.
Most journals operate a “blind” peer review, i.e. anything identifying the author is removed before sending for review. However, there are often clues within the paper, such as mentions of previous publications or of Chinese research. Also, any native English speaker will pick up on English that just does not sound natural to him, even if it is technically grammatical: do you know any Americans who can write flawless Chinese?
What you should do
Do you know anyone with excellent English who you could ask to advise you on the English in your paper? Ideally, this should be someone who understands the subject matter of your paper, or at least is familiar with the demands of scientific publication. Alternatively, use the services of a company like ISE, with a proven track record of English language editing for scientific papers.
How to respond: We regret there were problems with the English. The paper has been carefully revised by [a native English speaker]/[a professional language editing service] to improve the grammar and readability.
Reviewer: There’s nothing new in this paper.
Author: Of course there is! Didn’t he read it properly?
Perhaps this got lost in the detail of the paper when you wrote it. Many authors get carried away writing their methods and results, with a wealth of data, but forget to add a strong conclusion. Perhaps you did not want to boast too much about your findings, or thought it was obvious that the results showed a major step forward. Perhaps you did not discuss other research in the area, either because you assumed the reviewer would be familiar with it, or because you did not want to draw attention to similar work. Maybe the reviewer thought you were not aware of this research. Perhaps there really isn’t anything new in your research. In that case, it is unlikely to get published. But surely there is something new to report: a larger study group, a different method or study population?
What you should do
Remember that many editors say the number one reason for rejecting a paper is that it does not present something new. Although most research builds on work done before, no one wants to waste time reading about research that doesn’t show anything new. Therefore you must make it clear what is new about your research findings. Read through your paper carefully to identify areas where you could clarify what is new. Pay particular attention to the abstract, discussion and conclusion (or concluding sentences). Add more details about the implications of your research. If necessary, add some more detail about other research in this subject area and how your research differs from it. Discuss conflicting research.
How to respond: Thank you for this valuable feedback. Our research [is the first to show that…]/[confirms the findings of White et al. in a younger age group…]/[improves the yield of…]. We have added a sentence to the Abstract (page 2, line 5) and a paragraph to the Discussion section (page 15, starting line 8) to clarify this.
Reviewer: Points out an error in your paper.
Author: Oh, no! I am so embarrassed, I will just withdraw my paper!
Everybody makes mistakes, so do not be disheartened. The review process should help you to improve your paper. The review process is usually “blind”, so the reviewer will not know author names or affiliations.
What you should do
If you can fix the problem with your paper, then do so. If this requires more experimental research, ask the Editor before proceeding, and indicate the likely time frame. If you can’t fix the problem, can you save anything from your research that is worth publishing?
How to respond: We are extremely grateful to Reviewer X for pointing out this problem. We have [recalculated the statistics]/[revised Table 1]/[re-examined the original scans] and adjusted the text where highlighted.
Reviewer: Points out an error in your paper, but you disagree.
Author: This reviewer is an idiot. Doesn’t he know anything about this subject area?
Not every reviewer is an expert in the exact field he’s asked to review. It is often hard for a journal to find enough reviewers for a paper. Or perhaps the Editor-in-Chief is not familiar with this area, and assigned the paper to a reviewer from a different field. Nevertheless, the reviewer gave his opinion, and you have to respond to it.
Author: I think this reviewer is biased!
The review process is usually “blind”, so the reviewer does not know who the author is. Perhaps you think the reviewer guessed you were non-English speaking, or even from China, and was prejudiced because of that. Perhaps you think the reviewer is biased against certain viewpoints or research fields. Like all humans, even reviewers have likes and dislikes, and they may be unaware of their own prejudices. As above, the reviewer gave his opinion, and you have to respond to it.
What you should do
Keep to the facts. Stay polite, but keep emotion out of it. If the reviewer’s comment is not well founded in fact, it should be quite easy to give a successful response. If you think the paper does not require a change, give a brief explanation with supporting references or data. Perhaps a small change to your paper might clarify the point. Any indication that the reviewer misunderstood your paper suggests you may need to make some changes. If your paper was rejected because of the review, you have the opportunity to appeal the decision. But remember that it is the Editor-in-Chief who makes the decision to reject. Only appeal if you really think the review misjudged your paper. You may submit your paper to another journal after rejection. But remember that there are a limited number of reviewers in any field of study. Your paper may be assigned to the same reviewer by a different journal, and he will not be impressed if he sees that his reviewer comments have been ignored.
How to respond: Here’s an example where the author felt it was not necessary to make any change, and has tactfully suggested to the Editor that the paper is aligned with other published research in this field.
The reviewer has commented that we have used the wrong method to test for ABC. Although we agree with the reviewer that method X was the accepted method in the past, since method Y was introduced by White et al. (J Sci Method 1999:35;1-10) this has become the standard, and so is now mentioned in research reports without further justification (as in the references cited in our paper). We have already included a citation to the original paper by White et al. If you require further discussion of this method, we will be happy to add a supporting paragraph to the paper.