We assessed content validity using a modified Delphi process with science-based (n=18) and practice-based (n=16) experts. Next, we refined the survey based on input from science- and practice-based experts, cognitive response testing, and item analysis of extant survey data. Field testing of the refined survey involved community stakeholders in Greenville County, South Carolina (n=50), East Boston, Massachusetts (n=30), and Tucson, Arizona (n=84) between 2019 and 2020. Construct validity was assessed with confirmatory factor analysis (CFA). Two-week test-retest reliability was assessed among a subsample of 14 paired respondents in South Carolina.
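To make the test-retest statistic concrete, the sketch below computes a single-measures, absolute-agreement intraclass correlation, ICC(2,1), from paired administrations. The paired scores and the `icc_2_1` helper are hypothetical illustrations, not the study's data or analysis code.

```python
from statistics import mean

def icc_2_1(scores):
    """Single-measures, absolute-agreement ICC(2,1) (Shrout & Fleiss)
    for an n-subjects x k-occasions table of scores."""
    n, k = len(scores), len(scores[0])
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]        # per-subject means
    col_means = [mean(col) for col in zip(*scores)]  # per-occasion means
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-occasions mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical subscale scores for 8 respondents at baseline and 2 weeks later
pairs = [
    [3.2, 3.4], [4.1, 4.0], [2.8, 3.0], [3.9, 3.8],
    [4.5, 4.4], [3.0, 3.1], [2.5, 2.7], [4.2, 4.2],
]
print(round(icc_2_1(pairs), 2))  # prints 0.98
```

ICC(2,1) is one common choice for test-retest designs because it treats measurement occasions as random and penalizes systematic shifts between administrations, not just poor rank-order agreement.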







For both expert groups, we recruited participants by email and/or telephone starting approximately 1.5 months in advance of the pre-determined session dates to avoid scheduling conflicts. Upon recruitment, participants in both groups were provided a description of the Stakeholder-driven Community Diffusion theory and referred to our prior survey development publication [5] for additional information. Participants were offered a gift card for their participation and were sent a summary report following completion of data collection.


For most questions, the proprietary platform allowed participants to compare their responses to those of the group by displaying individual responses on the screen anonymously in real time (apart from sociodemographic questions that could compromise anonymity). We used this approach, rather than a traditional Delphi method [13], to reduce participant burden and avoid attrition across multiple rounds of surveys.


We refined the beta prototype (v2) survey with information from three sources: (i) survey cognitive response testing, (ii) previously collected beta prototype knowledge and engagement survey data, and (iii) findings from the content validity assessment. Through this refinement process, we also sought to reduce participant burden by abbreviating the knowledge and engagement scales.


Survey refinement was also informed by knowledge and engagement data collected prospectively with the beta prototype (v2) survey [5] in Somerville, Massachusetts [6] and Cuyahoga County, Ohio, between 2015 and 2019. Manuscripts describing knowledge and engagement results in these communities are forthcoming. We examined mean ± SD responses of the 18 knowledge items and 25 engagement items from a total of 300 observations in Somerville and 239 observations in Cuyahoga County. Observations were stratified by community and measurement period. We reviewed items using the following criteria derived from extant measurement research [15,16,17,18,19]:


Using findings from the content validity assessment, we considered the addition of new measurement constructs (distinct from knowledge and engagement), new domains within knowledge and engagement, and new survey items that fit within existing constructs (when possible, adapted from existing measures) for the refined release candidate (v3) survey. We prioritized concepts salient to both science- and practice-based experts. We also considered concepts salient to only one expert group if they were reported by multiple participants and/or rated highly. All considered additions were reviewed by the research team and corroborated with the literature on childhood obesity prevention, community engagement, and related fields.


Additional File 1 includes annotated knowledge and engagement measures with documented changes to the beta prototype (v2) survey during the refinement process, resulting in the release candidate (v3) used in the field test (described below).


In our prior work, knowledge and engagement concepts were conceived, vetted, and pilot-tested by an international team of investigators with deep expertise in whole-of-community childhood obesity prevention interventions [5]. However, through the content validity assessments in the current study, we took an important next step in incorporating a broader range of expertise from both researcher and practitioner perspectives. Participants in both expert groups gave strong ratings of the existing knowledge and engagement domains, contributing to our confidence that the scales reflect important stakeholder characteristics that might help catalyze community change. Beyond this confirmatory finding, the assessments yielded key additions to our knowledge and engagement measures spanning concepts such as systems approaches, social determinants of health, inclusivity, shared decision-making power, and shared community ownership. With these changes, we believe that the revised survey instrument will have greater resonance with its users and reflect critical, up-to-date focal points of what might be required to address childhood obesity at the community level.


The modified knowledge and engagement scales demonstrated strong reliability characteristics, with multiple indicators exceeding those reported from our prior testing with the beta prototype (v2) survey [5]. Internal scale consistency improved for all knowledge subscales (e.g., for intervention factors, from α=0.58 previously to α=0.87 in our current study). Engagement internal scale consistency was high in both studies; however, we observed stronger two-week test-retest agreement for each engagement subscale score (e.g., for influence & power, from ICC=0.55 and 13% within-subject variation to ICC=0.95 and 7% within-subject variation).
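Cronbach's alpha, the internal-consistency statistic quoted above, is computed from an items-by-respondents score matrix as α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where σ²ᵢ are item variances and σ²ₜ is the variance of respondents' total scores. A minimal sketch with hypothetical 5-point Likert responses (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists, where each inner
    list holds one item's scores aligned across the same respondents."""
    k = len(items)
    sum_item_var = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical 5-point Likert responses: 4 items x 6 respondents
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [3, 4, 3, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # prints 0.91
```

Using population variance (`pvariance`) throughout is harmless here: the n versus n−1 denominators cancel in the variance ratio, so sample variances would give the same alpha.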

