-
1.
Updates on Age to Start and Stop Colorectal Cancer Screening: Recommendations From the U.S. Multi-Society Task Force on Colorectal Cancer.
Patel, SG, May, FP, Anderson, JC, Burke, CA, Dominitz, JA, Gross, SA, Jacobson, BC, Shaukat, A, Robertson, DJ
The American journal of gastroenterology. 2022;(1):57-69
Abstract
This document is a focused update to the 2017 colorectal cancer (CRC) screening recommendations from the U.S. Multi-Society Task Force on Colorectal Cancer, which represents the American College of Gastroenterology, the American Gastroenterological Association, and the American Society for Gastrointestinal Endoscopy. This update is restricted to addressing the age to start and stop CRC screening in average-risk individuals and the recommended screening modalities. Although there is no literature demonstrating that CRC screening in individuals under age 50 improves health outcomes such as CRC incidence or CRC-related mortality, sufficient data led the U.S. Multi-Society Task Force to suggest that average-risk CRC screening begin at age 45. This recommendation is based on the increasing disease burden among individuals under age 50, emerging data that the prevalence of advanced colorectal neoplasia in individuals ages 45 to 49 approaches rates in individuals ages 50 to 59, and modeling studies demonstrating that the benefits of screening outweigh the potential harms and costs. For individuals ages 76 to 85, the decision to start or continue screening should be individualized and based on prior screening history, life expectancy, CRC risk, and personal preference. Screening is not recommended after age 85.
-
2.
Different combinations of the GLIM criteria for patients awaiting a liver transplant: Poor performance for malnutrition diagnosis but a potentially useful prognostic tool.
Santos, BC, Fonseca, ALF, Ferreira, LG, Ribeiro, HS, Correia, MITD, Lima, AS, Penna, FGCE, Anastácio, LR
Clinical nutrition (Edinburgh, Scotland). 2022;(1):97-104
Abstract
BACKGROUND & AIMS Studies using the Global Leadership Initiative on Malnutrition (GLIM) criteria in patients with liver cirrhosis are limited. This study aimed to assess the impact of malnutrition according to the GLIM criteria on the outcomes of patients awaiting a liver transplant (LTx) and to compare these criteria with the Subjective Global Assessment (SGA). METHODS This retrospective observational study included adult patients awaiting LTx. Patient clinical data, nutritional status according to various tools including SGA, and resting energy expenditure were assessed. Combining the distinct phenotypic and etiologic criteria yielded 36 different GLIM combinations. The GLIM criteria and SGA were compared using the kappa coefficient. Variables associated with mortality before and after LTx and with a longer length of stay (LOS) after LTx (≥18 days) were assessed by Cox regression and logistic regression analyses, respectively. RESULTS A total of 152 patients were included [median age 52.0 (interquartile range: 46.5-59.5) years; 66.4% men; 63.2% malnourished according to SGA]. The prevalence of malnutrition according to the GLIM criteria ranged from 0.7% to 30.9%. The majority of the GLIM combinations exhibited poor agreement with SGA. Independent predictors of mortality before and after LTx were the presence of ascites or edema (p = 0.011; HR: 2.58; 95% CI: 1.24-5.36), GLIM 32 (phase angle [PA] + MELD) (p = 0.026; HR: 2.08; 95% CI: 1.09-3.97), GLIM 33 (PA + MELD-Na ≥ 12) (p = 0.018; HR: 2.17; 95% CI: 1.14-4.13), and GLIM 34 (PA + Child-Pugh) (p = 0.043; HR: 1.96; 95% CI: 1.02-3.77). Malnutrition according to GLIM 28 (handgrip strength + Child-Pugh) was independently associated with a longer LOS (p = 0.029; OR: 7.21; 95% CI: 1.22-42.50). CONCLUSION The majority of GLIM combinations had poor agreement with SGA, and 4 of the 36 GLIM combinations were independently associated with adverse outcomes.
-
3.
Managing folate deficiency implies filling the gap between laboratory and clinical assessment.
Ferraro, S, Biganzoli, G, Gringeri, M, Radice, S, Rizzuto, AS, Carnovale, C, Biganzoli, EM, Clementi, E
Clinical nutrition (Edinburgh, Scotland). 2022;(2):374-383
Abstract
The characterization of folate status in subjects at risk of deficiency and with altered vitamin homeostasis is crucial to endorse preventive intervention health policies, especially in developed countries. Several physiological changes (e.g. pregnancy), clinical situations, and diseases have been associated with increased requirement and impaired intake and absorption of folate. However, clinical practice guidelines (CPGs) endorse folic acid supplementation while generally discarding the use of serum folate determination to assess the risk of deficiency and/or the baseline concentration. Poor confidence in the diagnostic accuracy of serum folate assays still persists in current CPGs, although recent standardization efforts have greatly improved inter-method variability and precision. In this review we critically appraise the methodological issues concerning laboratory folate determination and the evidence on the potential adverse effects of folic acid exposure. The final aim is to build a sound background to promote serum folate-based, cost-effective health care policies by optimizing folic acid supplementation in subjects at risk of deficiency and with altered folate homeostasis. Our first result was to adjust, in relation to current serum folate assays, the thresholds reported by CPGs as indices of folate status, defined on the basis of their association with metabolic and hematologic indicators. We identify a statistically significant difference between the estimated thresholds and accordingly show that the assessment of folate status actually changes in relation to the assay employed. The use of the method-dependent thresholds reported here may pragmatically support the stewardship of folic acid supplementation in clinical practice and increase the cost-effectiveness of health care policies.
-
4.
A prediction rule for severe adverse events in all inpatients with community-acquired pneumonia: a multicenter observational study.
Sakakibara, T, Shindo, Y, Kobayashi, D, Sano, M, Okumura, J, Murakami, Y, Takahashi, K, Matsui, S, Yagi, T, Saka, H, et al
BMC pulmonary medicine. 2022;(1):34
Abstract
BACKGROUND Prediction of inpatients with community-acquired pneumonia (CAP) at high risk for severe adverse events (SAEs) requiring higher-intensity treatment is critical. However, evidence regarding prediction rules applicable to all patients with CAP, including those with healthcare-associated pneumonia (HCAP), is limited. The objective of this study was to develop and validate a new prediction system for SAEs in inpatients with CAP. METHODS Logistic regression analysis was performed in 1334 inpatients of a prospective multicenter study to develop a multivariate model predicting SAEs (death, requirement of mechanical ventilation, and vasopressor support within 30 days after diagnosis). The developed ALL-COP SCORE rule based on the multivariate model was validated in 643 inpatients in another prospective multicenter study. RESULTS The ALL-COP SCORE rule included albumin (< 2 g/dL, 2 points; 2-3 g/dL, 1 point), white blood cell count (< 4000 cells/μL, 3 points), chronic lung disease (1 point), confusion (2 points), PaO2/FIO2 ratio (< 200 mmHg, 3 points; 200-300 mmHg, 1 point), potassium (≥ 5.0 mEq/L, 2 points), arterial pH (< 7.35, 2 points), systolic blood pressure (< 90 mmHg, 2 points), PaCO2 (> 45 mmHg, 2 points), HCO3- (< 20 mmol/L, 1 point), respiratory rate (≥ 30 breaths/min, 1 point), pleural effusion (1 point), and extent of chest radiographical infiltration in unilateral lung (> 2/3, 2 points; 1/2-2/3, 1 point). Patients with 4-5, 6-7, and ≥ 8 points had probabilities of SAEs of 17%, 35%, and 52%, respectively, whereas the probability of SAEs was 3% in patients with ≤ 3 points. The ALL-COP SCORE rule exhibited a higher area under the receiver operating characteristic curve (0.85) compared with the other predictive models, and an ALL-COP SCORE threshold of ≥ 4 points exhibited 92% sensitivity and 60% specificity.
CONCLUSIONS The ALL-COP SCORE rule can be useful for predicting SAEs and aiding decision-making on treatment intensity for all inpatients with CAP, including those with HCAP. Higher-intensity treatment should be considered in patients with CAP and an ALL-COP SCORE of ≥ 4 points. TRIAL REGISTRATION This study was registered with the University Medical Information Network in Japan, registration numbers UMIN000003306 and UMIN000009837.
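The point assignments in the ALL-COP SCORE rule above can be sketched as a simple additive function. This is a sketch only: the function name, variable names, and the handling of the interval boundaries (albumin 2-3 g/dL, PaO2/FIO2 200-300 mmHg, infiltration 1/2-2/3) are assumptions, not taken from the study.

```python
def all_cop_score(albumin_g_dl, wbc_per_ul, chronic_lung_disease, confusion,
                  pf_ratio_mmhg, potassium_meq_l, arterial_ph, sbp_mmhg,
                  paco2_mmhg, hco3_mmol_l, resp_rate, pleural_effusion,
                  infiltrate_fraction):
    """Sum the ALL-COP SCORE points listed in the abstract.

    Boundary handling for the graded items is an assumption; the
    abstract gives only the interval labels, not edge conventions.
    """
    score = 0
    if albumin_g_dl < 2:               # albumin < 2 g/dL: 2 pts
        score += 2
    elif albumin_g_dl <= 3:            # 2-3 g/dL: 1 pt
        score += 1
    if wbc_per_ul < 4000:              # white blood cell count: 3 pts
        score += 3
    if chronic_lung_disease:
        score += 1
    if confusion:
        score += 2
    if pf_ratio_mmhg < 200:            # PaO2/FIO2 < 200 mmHg: 3 pts
        score += 3
    elif pf_ratio_mmhg <= 300:         # 200-300 mmHg: 1 pt
        score += 1
    if potassium_meq_l >= 5.0:
        score += 2
    if arterial_ph < 7.35:
        score += 2
    if sbp_mmhg < 90:                  # systolic blood pressure
        score += 2
    if paco2_mmhg > 45:
        score += 2
    if hco3_mmol_l < 20:
        score += 1
    if resp_rate >= 30:
        score += 1
    if pleural_effusion:
        score += 1
    if infiltrate_fraction > 2 / 3:    # unilateral infiltration > 2/3: 2 pts
        score += 2
    elif infiltrate_fraction >= 1 / 2: # 1/2-2/3: 1 pt
        score += 1
    return score
```

Under the threshold reported in the abstract, a total of ≥ 4 points would flag a patient for consideration of higher-intensity treatment (92% sensitivity, 60% specificity in the validation cohort).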
-
5.
Hiding unhealthy heart outcomes in a low-fat diet trial: the Women's Health Initiative Randomized Controlled Dietary Modification Trial finds that postmenopausal women with established coronary heart disease were at increased risk of an adverse outcome if they consumed a low-fat 'heart-healthy' diet.
Noakes, TD
Open heart. 2021;(2)
Abstract
The Women's Health Initiative Randomized Controlled Dietary Modification Trial (WHIRCDMT) was designed to test whether the US Department of Agriculture's 1977 Dietary Guidelines for Americans protects against coronary heart disease (CHD) and other chronic diseases. The only significant finding in the original 2006 WHIRCDMT publication was that postmenopausal women with CHD randomised to a low-fat 'heart-healthy' diet in 1993 were at 26% greater risk of developing additional CHD events compared with women with CHD eating the control diet. A 2017 WHIRCDMT publication includes data for an additional 5 years of follow-up. It finds that CHD risk in this subgroup of postmenopausal women had increased further to 47%-61%. The authors present three post-hoc rationalisations to explain why this finding is 'inadmissible': (1) only women in this subgroup were less likely to adhere to the prescribed dietary intervention; (2) their failure to follow the intervention diet increased their CHD risk; and (3) only these women were more likely to not have received cholesterol-lowering drugs. These rationalisations appear spurious. Rather these findings are better explained as a direct consequence of postmenopausal women with features of insulin resistance (IR) eating a low-fat high-carbohydrate diet for 13 years. All the worst clinical features of IR, including type 2 diabetes mellitus (T2DM) in some, can be 'reversed' by the prescription of a high-fat low-carbohydrate diet. The Women's Health Study has recently reported that T2DM (10.71-fold increased risk) and other markers of IR including metabolic syndrome (6.09-fold increased risk) were the most powerful predictors of future CHD development in women; blood low-density lipoprotein-cholesterol concentration was a poor predictor (1.38-fold increased risk). These studies challenge the prescription of the low-fat high-carbohydrate heart-healthy diet, at least in postmenopausal women with IR, especially T2DM. 
According to the medical principle of 'first do no harm', this practice is now shown to be not evidence-based, making it scientifically unjustifiable, perhaps unethical.
-
6.
Risk Factors and Lifestyles in the Development of Atrial Fibrillation Among Individuals Aged 20-39 Years.
Itoh, H, Kaneko, H, Fujiu, K, Kiriyama, H, Morita, K, Kamon, T, Michihata, N, Jo, T, Takeda, N, Morita, H, et al
The American journal of cardiology. 2021;:40-44
Abstract
Epidemiological evidence on the relationship of modifiable risk factors and lifestyles with incident atrial fibrillation (AF) in young adults remains insufficient. We aimed to identify the determinants of AF among young adults using a nationwide epidemiological database. Medical records of 286,876 individuals (20-39 years) without a prior history of cardiovascular disease were extracted from the JMDC Claims Database. We analyzed the association of modifiable risk factors with the incidence of AF. The median (interquartile range) age was 34 (29-37) years, and 54.4% were men. After a mean follow-up of 1,017 ± 836 days, 267 individuals (0.1%) developed AF. Multivariable Cox regression analysis demonstrated that high waist circumference, hypertension, cigarette smoking, and poor sleep quality, as well as age and sex, were associated with an increased incidence of AF. Kaplan-Meier curves showed that the number of modifiable components, including high waist circumference, hypertension, cigarette smoking, and poor sleep quality, clearly stratified the risk of AF development (log-rank test, p < 0.001). Age- and sex-adjusted Cox regression analyses showed that, compared with individuals with no components, those with one (hazard ratio [HR] 1.56, 95% confidence interval [CI] 1.13-2.18), two (HR 2.03, 95% CI 1.40-2.95), three (HR 3.48, 95% CI 2.19-5.54), and four (HR 10.78, 95% CI 5.26-22.11) components had an increased incidence of AF. In conclusion, high waist circumference, hypertension, cigarette smoking, and poor sleep quality were associated with the development of AF among young adults, suggesting the importance of controlling these modifiable factors for the primordial prevention of AF in young adults.
-
7.
Clinical risk predictors in atrial fibrillation patients following successful coronary stenting: ENTRUST-AF PCI sub-analysis.
Goette, A, Eckardt, L, Valgimigli, M, Lewalter, T, Laeis, P, Reimitz, PE, Smolnik, R, Zierhut, W, Tijssen, JG, Vranckx, P
Clinical research in cardiology : official journal of the German Cardiac Society. 2021;(6):831-840
Abstract
AIMS This subgroup analysis of the ENTRUST-AF PCI trial (ClinicalTrials.gov Identifier: NCT02866175; Date of registration: August 2016) evaluated type of AF and CHA2DS2-VASc score parameters as predictors of clinical outcome. METHODS Patients were randomly assigned after percutaneous coronary intervention (PCI) to either edoxaban (60 mg/30 mg once daily [OD]; n = 751) plus a P2Y12 inhibitor for 12 months or a vitamin K antagonist (VKA; n = 755) plus a P2Y12 inhibitor and aspirin (100 mg OD, for 1-12 months). The primary outcome was a composite of major/clinically relevant non-major (CRNM) bleeding within 12 months. The composite efficacy endpoint consisted of cardiovascular death, stroke, systemic embolic events, myocardial infarction (MI), and definite stent thrombosis. RESULTS Major/CRNM bleeding event rates were 20.7%/year and 25.6%/year with edoxaban and warfarin, respectively (HR [95% CI]: 0.83 [0.654-1.047]). The event rates of the composite outcome were 7.26%/year and 6.86%/year, respectively (HR [95% CI]: 1.06 [0.711-1.587]), and of overall net clinical benefit 12.48%/year and 12.80%/year, respectively (HR [95% CI]: 0.99 [0.730-1.343]). Increasing CHA2DS2-VASc score was associated with increased rates of all outcomes. A CHA2DS2-VASc score ≥ 5 was a marker for stent thrombosis. Paroxysmal AF was associated with a higher occurrence of MI (4.87% versus 2.01%, p = 0.0024). CONCLUSION After PCI in patients with AF, increasing CHA2DS2-VASc score was associated with increased bleeding rates, and a CHA2DS2-VASc score ≥ 5 predicted the occurrence of stent thrombosis. Paroxysmal AF was associated with MI. These findings may have important clinical implications in patients with AF.
-
8.
The Association Between Asthma and Risk of Myasthenia Gravis: A Systematic Review and Meta-analysis.
Yingchoncharoen, P, Charoenngam, N, Ponvilawan, B, Thongpiya, J, Chaikijurajai, T, Ungprasert, P
Lung. 2021;(3):273-280
Abstract
PURPOSE This study aimed to investigate the association between asthma and the risk of myasthenia gravis (MG) using systematic review and meta-analysis. METHODS Potentially eligible studies were identified from the Medline and EMBASE databases from inception to July 2020 using a search strategy comprising terms for "Asthma" and "Myasthenia Gravis". Eligible cohort studies had to consist of one cohort of individuals with asthma and another cohort of individuals without asthma, and to report the relative risk (RR) with 95% confidence interval (95% CI) of incident MG between the groups. Eligible case-control studies had to include cases with MG and controls without MG, explore their history of asthma, and report the odds ratio (OR) with 95% CI for the association between asthma status and MG. Point estimates with standard errors were retrieved from each study and combined using the generic inverse variance method. RESULTS A total of 6,835 articles were identified. After two rounds of independent review by five investigators, two cohort studies and three case-control studies met the eligibility criteria and were included in the meta-analysis. Pooled analysis showed that asthma was significantly associated with the risk of MG, with a pooled risk ratio of 1.38 (95% CI 1.02-1.86). The funnel plot was symmetric, which was not suggestive of publication bias. CONCLUSION The current study found a significant association between asthma and an increased risk of MG.
-
9.
Non-invasive stratification of hepatocellular carcinoma risk in non-alcoholic fatty liver using polygenic risk scores.
Bianco, C, Jamialahmadi, O, Pelusi, S, Baselli, G, Dongiovanni, P, Zanoni, I, Santoro, L, Maier, S, Liguori, A, Meroni, M, et al
Journal of hepatology. 2021;(4):775-782
Abstract
BACKGROUND & AIMS Hepatocellular carcinoma (HCC) risk stratification in individuals with dysmetabolism is a major unmet need. Genetic predisposition contributes to non-alcoholic fatty liver disease (NAFLD). We aimed to exploit robust polygenic risk scores (PRS) that can be evaluated in the clinic to gain insight into the causal relationship between NAFLD and HCC, and to improve HCC risk stratification. METHODS We examined at-risk individuals (NAFLD cohort, n = 2,566; 226 with HCC; and a replication cohort of 427 German patients with NAFLD) and the general population (UK Biobank [UKBB] cohort, n = 364,048; 202 with HCC). Variants in PNPLA3-TM6SF2-GCKR-MBOAT7 were combined in a hepatic fat PRS (PRS-HFC), and then adjusted for HSD17B13 (PRS-5). RESULTS In the NAFLD cohort, the adjusted impact of genetic risk variants on HCC was proportional to the predisposition to fatty liver (p = 0.002) with some heterogeneity in the effect. PRS predicted HCC more robustly than single variants (p < 10⁻¹³). The association between PRS and HCC was mainly mediated through severe fibrosis, but was independent of fibrosis in clinically relevant subgroups, and was also observed in those without severe fibrosis (p < 0.05). In the UKBB cohort, PRS predicted HCC independently of classical risk factors and cirrhosis (p < 10⁻⁷). In the NAFLD cohort, we identified high PRS cut-offs (≥0.532/0.495 for PRS-HFC/PRS-5) that in the UKBB cohort detected HCC with ~90% specificity but limited sensitivity; PRS predicted HCC both in individuals with (p < 10⁻⁵) and without cirrhosis (p < 0.05). CONCLUSIONS Our results are consistent with a causal relationship between hepatic fat and HCC. PRS improved the accuracy of HCC detection and may help stratify HCC risk in individuals with dysmetabolism, including those without severe liver fibrosis. Further studies are needed to validate our findings.
LAY SUMMARY By analyzing variations in genes that contribute to fatty liver disease, we developed two risk scores to help predict liver cancer in individuals with obesity-related metabolic complications. These risk scores can be easily tested in the clinic. We showed that the risk scores helped to identify the risk of liver cancer both in high-risk individuals and in the general population.
-
10.
Ten-year incidence and assessment of safe screening intervals for diabetic retinopathy: the OPHDIAT study.
Chamard, C, Daien, V, Erginay, A, Gautier, JF, Villain, M, Tadayoni, R, Carriere, I, Massin, P
The British journal of ophthalmology. 2021;(3):432-439
Abstract
BACKGROUND To estimate the 10-year incidence of referable diabetic retinopathy (DR) in a French population with type 1 and 2 diabetes mellitus (DM). A secondary objective was the assessment of safe screening intervals in patients with diabetes without retinopathy. METHODS An observational, prospective, multicentre study conducted between June 2004 and September 2017, based on a regional screening programme for DR in the Paris region. The incidence of referable DR in patients without retinopathy at baseline was calculated by the Turnbull survival estimator. A safe screening interval was defined as a 95% probability of remaining without referable DR. RESULTS Among the 25 745 participants with type 1 (n=6086) or type 2 (n=19 659) DM, the 10-year cumulative incidence of referable DR was 19.10% (95% CI 17.21% to 21.14%) and 17.03% (15.78% to 18.35%), respectively; median (IQR) follow-up=3.33 (4.24) years. The safe screening interval for patients without DR at the first examination was 2.2 (95% CI 2.0 to 2.4) years for type 1 DM and 3.0 (2.9 to 3.1) years for type 2 DM. In a subgroup of low-risk patients with type 2 DM, the safe screening interval was 4.2 (3.8 to 4.6) years. CONCLUSIONS These data suggest that in the Paris area, 2-year, 3-year, and 4-year screening intervals can be considered safe for type 1 DM, type 2 DM, and low-risk patients with type 2 DM, respectively, in patients without DR at the first examination. While these data might support extending screening intervals, a randomised clinical trial would be needed to confirm safety for patients with DM.