Building Data Literacy Skills in Army Civilian Professionals through Deliberate Workflow Changes

A Faculty Case Study

By Dawn Bikowski, PhD; Albert Stegall, MBA; and Erin O’Reilly, PhD

Article published on July 1, 2025, in the Army Civilian Professional Journal, Issue 1


Defense Language Institute Seal (Image courtesy of DLIFLC)

The context of this case study is the Defense Language Institute Foreign Language Center (DLIFLC), a U.S. Army institution of higher education. The mission of DLIFLC is to provide high-quality, culturally based foreign language education, training, evaluation, and degrees for the Department of Defense on a global scale, guided by the values of commitment, adaptability, integrity, and respect.1 The mission and values tie into the institute’s vision: to generate and sustain warrior linguists throughout their military journey from apprentice to master. DLIFLC is heavily mission-driven and dedicated to its joint service member student body, employees, and stakeholders. The institute employs a collaborative workforce in which civilian staff serving in faculty or GS positions work alongside active duty members of the U.S. Army, Air Force, Navy, Marine Corps, and Coast Guard. DLIFLC is accredited to offer Associate of Arts and Bachelor of Arts degrees to qualifying graduates and takes a proactive approach to driving innovation and change. A process foundational to this proactive approach is the cyclical program review.

Standardized, recurring program reviews allow colleges and universities to “reflect, self-assess, and plan,” facilitating communication between units and top leadership.2 A crucial component of program reviews is long-term self-examination and dialogue built on common datasets. While DLIFLC administrators have engaged in program reviews for over two decades, an internal review in 2017 revealed that DLIFLC lacked consistent data collection procedures and that civilians often lacked the data literacy skills required to conduct the robust analysis and evaluation needed to fully benefit from the program review process. This article outlines how DLIFLC purposefully revised its longstanding program review process, with stakeholder input and structured staff support, to develop data literacy skills among mid-level managers across the civilian corps. The ultimate goal is for leaders to identify key problem areas and create feasible improvement plans to meet the mission.

Program Reviews at DLIFLC: Implementation and Purpose

Program reviews occur every three years for each of DLIFLC’s language programs (e.g., French, Spanish, Arabic). The reviews consist of two components: a presentation with accompanying briefing deck and a final written report. Each language program’s dean populates a standardized briefing template in consultation with their administrative teams, but program administrators are free to determine content curation and briefing focus. The audience comprises the faculty and administrators within the program itself as well as senior civilian and military academic leadership. In all cases, the program’s civilian and military leadership collaborate to compile and analyze key data, following guidelines provided by the Quality Assurance Office (QAO). Program review data covers topics such as staffing levels, curriculum challenges and successes, and, crucially, evidence of student learning. This evidence includes aggregated outcomes data on final exit exams (overall and by language skill), attrition and graduation rates, and student course evaluation feedback (numerical and comment-based). Data are displayed in aggregate for the three-year period covered in the review and by individual year; program data are also compared against other language programs and the overall average for DLIFLC. These data sets follow guidance from Kuh, Ikenberry, et al.3
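
To make those aggregate and per-year views concrete, the sketch below shows how such comparisons might be computed in Python with pandas. The column names and figures are invented for illustration only and do not reflect DLIFLC’s actual data schema or results.

    import pandas as pd

    # Hypothetical student-outcome records; DLIFLC's actual schema differs.
    records = pd.DataFrame({
        "program":     ["French", "French", "Spanish", "Spanish", "Arabic", "Arabic"],
        "fiscal_year": [2022, 2023, 2022, 2023, 2022, 2023],
        "exit_score":  [2.1, 2.3, 2.4, 2.2, 1.9, 2.0],
        "graduated":   [True, True, True, False, True, True],
    })

    # Per-program, per-year view (the "by individual year" display).
    by_year = records.groupby(["program", "fiscal_year"]).agg(
        mean_exit_score=("exit_score", "mean"),
        graduation_rate=("graduated", "mean"),
    )

    # Aggregate view for the full review period, compared against the
    # institute-wide average across all programs.
    by_program = records.groupby("program")["exit_score"].mean()
    institute_avg = records["exit_score"].mean()
    print(by_year)
    print(by_program - institute_avg)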

A crucial component of program reviews is long-term self-examination and dialogue built on common datasets.

The purpose of program reviews is threefold: (1) to foster excellence within the program by identifying ways to increase quality and providing critical guidance for administrative decisions that guide mission accomplishment; (2) to promote dialogue, critical reflection, self-assessment, and strategic planning among faculty and leadership; and (3) to comply with the accreditation requirements of both the civilian and Army accrediting bodies for evidence of the institute’s commitment to quality assurance and ongoing improvement.4 The QAO manages the review process, working in close collaboration with the program under review and the Office of Standardization and Academic Excellence (OSAE). Oversight and guidance are crucial to this process, since evidence of student learning is only consequential if it is utilized to “catalyze productive change” and “stimulate improvement.”5

Prior to the project implementation outlined in this case study, the institute faced many challenges with program reviews; reviews were inconsistent across the enterprise and yielded unclear guidance for future program improvement actions. Some of the challenges stemmed from the need for integrated technological tools that could provide key data in real time. Others resulted from the civilian corps faculty’s struggles with data use, including how to determine salient points and how to connect key data to their teaching and program priorities. This lack of capacity for data analysis can lead to data misuse and poor decisions.6

Increased Modernization and Focus on Data Literacy

In mid-2022, DLIFLC’s Directorate of Academic Affairs (DAA) recognized an opportunity to support institutional information and knowledge management by leveraging emerging Industry 4.0 data analytics platforms to modernize data delivery in a project called Academic Reporting Tools (ART). Concurrently, the reporting requirements outlined by the institute’s civilian accrediting body shifted toward evidence of data-driven decision-making linked to planning priorities, resourcing decisions, and outcomes improvements. In discussions between QAO and DAA, it was determined that an upcoming program review cycle provided an excellent opportunity to foster a data-use culture among program managers and specialists in support of institutional decision-making. Crucial to this next step was the development of data literacy among the civilian corps.

Data Literacy

In a general sense, data literacy refers to the ability to use data effectively and responsibly. In education, data-literate teachers collect and interpret many types of data (ranging from assessments to student evaluations or behaviors) and use that information to create specific lessons, activities, materials, and tests. Educational administrators must be able to apply their knowledge of instructional content (subject matter) and learning (methodology) as they analyze larger-scope data trends to guide programmatic decisions. These types of analyses are new for many professionals, given that opportunities to collect and analyze large amounts of data have emerged fairly recently.7 Mid-level Army civilian program managers involved in the training and education mission must have strong data literacy skills in order to engage in data-informed decision making, meaning they combine the analysis of data with the experience and professional knowledge of educators to inform the decision-making process.8 Recent advances in technology have expanded opportunities for data delivery, simplifying the process of uncovering data insights and helping educators and program managers develop their data literacy skills. Key among these advances are emerging data analytics platforms, which allow organizations to provide visualized data reports to decision-makers.

Data Analytics and Data Visualization

Industry 4.0 data analytics platforms provide the capability to extract, transform, and load data from multiple live-connected data sources, building semantic models that support interactive reports and dashboards with visualizations (graphs, lists, rankings, etc.) tailored to specific audience needs. Report users can select filters within a report to aggregate and disaggregate data across multiple dimensions and uncover program insights. Data visualization is most effective when it follows key principles: the data should be accurate and easily understood, with a clear message, a consistent design, and sufficient context for users; the visuals should encode appropriate and key data and allow for user interactivity (e.g., zooming, filtering) as needed. Although robust technology tools are crucial for data analytics, staff preparation for digital transitions is equally important and often overlooked in organizations.9
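
As a rough illustration of that filter-then-aggregate interaction, the Python sketch below mimics a dashboard slicer: it applies a user’s filters, then rolls data up or drills it down by the requested dimensions. The function name and columns are hypothetical and are not part of any platform described here.

    import pandas as pd

    def dashboard_view(df, dimensions, **filters):
        # Mimic a dashboard slicer: apply the user's filters first,
        # then aggregate by whichever dimensions are selected.
        for column, value in filters.items():
            df = df[df[column] == value]
        return df.groupby(dimensions)["score"].agg(["mean", "count"])

    # Illustrative records only; real platforms connect to live sources.
    data = pd.DataFrame({
        "program": ["French", "French", "Arabic", "Arabic"],
        "skill":   ["listening", "reading", "listening", "reading"],
        "score":   [2.3, 2.5, 2.0, 2.2],
    })

    print(dashboard_view(data, ["program"]))                    # roll up
    print(dashboard_view(data, ["program", "skill"]))           # disaggregate
    print(dashboard_view(data, ["program"], skill="listening")) # filtered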

Revising DLIFLC’s Program Review Process

DLIFLC’s QAO initiated the program review revision process using the Army design methodology, beginning with a review of the current state. Because data use was identified as a key objective in the revision, the team consulted academic literature to determine effective means to build data capacity. Social cognitive theory, particularly the concept of triadic reciprocity, where environmental and behavioral factors are improved to build personal self-efficacy, has proven effective in building data capacity.10

Current State

The QA officer collaborated with all program deans and their support teams to assemble a working group to assess the current state, analyzing the clarity of the goals and processes for conducting program reviews along with the strengths and weaknesses of current practices. The main strength of the former program review process was collaboration among stakeholders. However, the team identified several weaknesses, which fell into three categories: environmental, behavioral, and personal factors. From a broad environmental view, data governance and access were consistent challenges, with language program administrators independently gathering data, leading to wasted time and inconsistencies. Some used data from centralized datasets, provided on request in PDF format and then manually transferred to spreadsheets, while others compiled their own statistics. Data silos were also a problem, as limited and irregular access to data hindered analysis. From a behavioral perspective, program reviews were not standardized; inconsistent visuals and metrics led to unreliable comparisons for senior administrators trying to identify strengths and challenges across programs. Administrators had to interpret raw data sets that were often static walls of numbers, again making trend identification across various dimensions (e.g., outcomes between languages, service member cohort) challenging. There was also a purpose misalignment, given that reviews often became performative, showcasing only strengths. The final weakness lay in the personal realm and related to data relevance: the previous program review framework provided no guidance on a data hierarchy or on prioritizing known gap areas to support the identification of resourcing needs.

Mid-level Army civilian program managers involved in the training and education mission must have strong data literacy skills in order to engage in data-informed decision making, meaning they combine the analysis of data with the experience and professional knowledge of educators to inform the decision-making process.

Desired End State

The working group identified the following characteristics of the desired end state for program reviews: data centrally located and accessible to key stakeholders in interactive reports; data reporting that was measurable and consistent across language programs; analysis and conclusions based on the data presented; and ways forward that included feasible solutions, clear timelines with accountability measures, and prioritized resourcing decisions.

Challenges

The challenges identified by the working group fell into three categories. First, there was an unclear connection between the information provided in program reviews and the needs and interests of the intended audience. Civilian faculty were asked to collate and interpret higher-level data than they were gathering and using at the classroom level; this reality impacted their larger understanding of the purpose of the program reviews. At the same time, mid-level civilian Army managers lacked access to the real-time data they needed to make larger programmatic changes. Second, the data literacy skills of the mid-level civilian Army managers were insufficient to (1) discern between qualitative/anecdotal and quantitative/factual data; (2) identify key information from the data provided; (3) identify trends to drive program improvement; (4) move beyond cherry-picking data that reflected favorably on them as educators; and (5) create potential solution pathways based on measurable objectives and feasible timelines. Third, the institute lacked the necessary technology tools and structures to centrally collect and visualize the key data points needed for the program reviews. The operational approaches articulated below address these issues.

Operational Approach (OA) Leading to Solutions

Two OAs were mapped out, starting with the technology tools. The QAO, the DAA academic reporting tool manager, and representative stakeholders engaged in an active dialogue on options for reports that provided visualized data, displayed to prioritize key results and trends. This dialogue helped set the stage for an environment that encouraged program data exploration. It resulted in standardized, interactive reports that encouraged users to explore multiple dimensions of their programs. Industry data visualization guidelines and the needs of the civilian faculty drove choices about which data to highlight and which visualization types to use, based on who would be using and analyzing the data while compiling the program review. This was an iterative process over several months, with stakeholder feedback incorporated into each draft visualization.

The other OA addressed the civilian corps faculty’s skills in interpreting the data for the reviews. The QAO, in conjunction with stakeholders, developed a step-by-step guide articulating the purpose and scope of program reviews. The guide covered the format and information sequence for a standardized briefing with qualitative and quantitative data, as well as due outs based on measurable objectives and feasible timelines. To encourage data-use behaviors, the QAO and the DAA Academic Reporting Tools manager created a step-by-step plan to coach academic leadership on how to use the ART reports, on common metrics used in DLIFLC reporting, and on how to interpret the data to produce data-driven due outs linked to prioritized planning agendas over five out-years.

Detail on the Academic Reporting Tool

In mid-2022, DAA began developing and publishing ART for stakeholders across the enterprise using Microsoft Power BI, an industry-leading data analytics platform already funded under an existing Microsoft 365 contract.11 ART allows DAA to provide DLIFLC leadership and other end users with the right information, for the right audience, at the right time. Subject matter experts can conduct much of the heavy lifting in data processing during the extract, transform, and load (ETL) process. They apply the correct calculations and ensure appropriate visualization at any of the first three levels of the knowledge hierarchy; raw data, information prepared for analysis, or completed reports can then contribute to organizational knowledge.12
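
A minimal Python sketch of those ETL stages appears below. It is only an analogue for illustration: ART performs this work inside Power BI against live-connected sources, and the file and column names here are invented.

    import pandas as pd

    # Extract: pull from source systems. These CSV files are placeholders;
    # ART connects to live data sources rather than flat files.
    enrollments = pd.read_csv("enrollments.csv")  # student_id, program, fiscal_year
    exit_exams = pd.read_csv("exit_exams.csv")    # student_id, listening, reading

    # Transform: join the sources and precompute the measures analysts
    # need, so report users never face raw, unprocessed tables.
    model = enrollments.merge(exit_exams, on="student_id", how="left")
    model["met_standard"] = (model["listening"] >= 2.0) & (model["reading"] >= 2.0)

    # Load: write the semantic-model-style table that reports draw from.
    model.to_parquet("reporting_model.parquet")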

For the program reviews, DAA provided the program review teams with ART reporting that contained two categories of data slides: one designed for presentation (briefing) and the other to supplement deeper program analysis. Each report page was interactive, encouraging users to filter data to uncover trends and insights. The reports highlighted program strengths along with critical insights. Colors in the visualizations were carefully selected, using industry recommendations, to draw attention to specific insights on each page. The report pages were designed with dynamic titles and key takeaway comments highlighting observations from the report visuals. The purpose of these elements was twofold: (1) they assisted users in recognizing and presenting data storytelling elements, and (2) the titles and comments provided a coaching tool to help build efficacy in reading the information contained in the visualizations (see figure 1).
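
The sketch below illustrates the idea behind a dynamic title in plain Python: the headline is computed from the filtered data rather than hard-coded, so it updates as users interact with the report. The metric and wording are invented for illustration and are not taken from ART.

    def dynamic_title(program, grad_rate, institute_rate):
        # Build the headline from the data itself, as dynamic titles do.
        gap = grad_rate - institute_rate
        direction = "above" if gap >= 0 else "below"
        return (f"{program}: graduation rate {grad_rate:.0%}, "
                f"{abs(gap):.1%} {direction} the institute average")

    print(dynamic_title("French", 0.87, 0.82))
    # -> French: graduation rate 87%, 5.0% above the institute average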

Figure 1. Interactive Dashboard Overview with Key Insight Callouts
(Figure by Albert Stegall)

The institute implemented a three-stage plan to develop mid-level Army civilian program managers’ data literacy and decision-making skills:

  • Orientation: The QAO met with senior leadership to provide an overview of the new program review process and expectations, to review standard metrics used for assessment in the program reviews, and to demonstrate how to use ART to analyze and present data.
  • Question/Answer Sessions: Senior OSAE specialists met with each program management team four to six weeks prior to their scheduled program review. The specialists answered questions regarding content, consistency, and organization. They also proactively addressed likely challenges the program management teams would face, such as including comprehensive data about potentially sensitive topics (e.g., teacher evaluations), ensuring that all conclusions and due outs were clearly tied to data, and ensuring that due outs were feasible within the timeline established. Questions regarding ART and data visualization were directed to the DAA Academic Reporting Tools manager, who met with teams as needed.
  • Coaching: Senior OSAE specialists met with each program management team within one week of their scheduled program review to go over the presentation content and delivery. The specialists reviewed slide content for consistency and accuracy, as well as for the quality of analysis. During briefing run-throughs, the specialists used guiding questions to coach teams on how to determine which information should be emphasized and why, how to link trends and due outs, and how to prepare for audience discussions and questions.

Conclusion

The most recent cycle of program reviews was the strongest to date. The use of Microsoft Power BI and the creation of a robust, iterative professional development program for Army civilian faculty and program managers led to reviews that focused on shared, central data points and that demonstrated reflection and proactive problem-solving. The reviews also prompted meaningful discussions of newly apparent issues related to faculty turnover, student learning gaps, and changes in policy that impact learning outcomes. Plans for continued process improvement include ensuring that civilian corps faculty have the data they need for robust analysis and that civilian program managers feel empowered to hold faculty accountable for scheduled due outs. The process of revising program reviews has demonstrated the importance of timelines that allow ample backward planning to meet stakeholder needs; step-by-step guidelines and samples; program manager input and buy-in; and cyclical support and professional development for the civilian corps faculty. Key to the success of this revised program review process is the use of current technologies to collect, share, and present data, as well as strong collaboration and communication among offices across the enterprise.

This article has been approved for public release by the Defense Language Institute Foreign Language Center’s Public Affairs Office. For verification, please e-mail mpao@dliflc.edu. Contents of this article are not necessarily the official views of the Defense Language Institute Foreign Language Center, nor are they endorsed by the Department of the Army, the Department of Defense, or the U.S. Government. All third-party products/materials featured in the article remain the intellectual property of their respective authors. Use of outside materials is done under the fair use copyright principle, for educational purposes only. The content of this article is the sole responsibility of the authors.

Notes

1. General Catalog 2024–2026 (Defense Language Institute Foreign Language Center, 2021), https://www.dliflc.edu/wp-content/uploads/2024/11/2025-26_General_Catalog.pdf.

2. Hanover Research, Best Practices in Academic Program Review (Hanover Research, 2012).

3. George D. Kuh, Stanley O. Ikenberry, et al., Using Evidence of Student Learning to Improve Higher Education (Jossey-Bass, 2015).

4. Army Doctrine Publication 6-0, Mission Command: Command and Control of Army Forces (U.S. Government Publishing Office [GPO], 2019), https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN34403-ADP_6-0-000-WEB-3.pdf.

5. Kuh, Ikenberry, et al., Using Evidence of Student Learning to Improve Higher Education, 40.

6. Ellen B. Mandinach and Kim Schildkamp, “Misconceptions about Data-Based Decision Making in Education: An Exploration of the Literature,” Studies in Educational Evaluation 69 (2021), https://doi.org/10.1016/j.stueduc.2020.100842.

7. Ibid.

8. Kim Schildkamp, Cindy L. Poortman, Johanna Ebbeler, and Jules M. Pieters, “How School Leaders Can Build Effective Data Teams: Five Building Blocks for a New Wave of Data-informed Decision Making,” Journal of Educational Change 20, no. 3 (2019): 283–325, https://doi.org/10.1007/s10833-019-09345-3.

9. Haroon Abbu, Paul Mugge, Gerhard Gudergan, Gerrit Hoeborn, and Alexander Kwiatkowski, “Measuring the Human Dimensions of Digital Leadership for Successful Digital Transformation,” Research-Technology Management 65, no. 3 (2022): 39–49, https://doi.org/10.1080/08956308.2022.2048588.

10. Amanda Datnow, Marie Lockton, and Hayley Weddle, “Capacity Building to Bridge Data Use and Instructional Improvement Through Evidence on Student Thinking,” Studies in Educational Evaluation 69 (2021), https://doi.org/10.1016/j.stueduc.2020.100869.

11. Kurt Schlegel, Anirudh Ganeshan, et al., “Magic Quadrant for Analytics and Business Intelligence Platforms,” Gartner, 20 June 2024, https://www.gartner.com/doc/reprints?id=1-2HVUGEM6&ct=240620&st=sb.

12. Field Manual 6-0, Commander and Staff Organization and Operations (U.S. GPO, 2022), https://rdl.train.army.mil/catalog-ws/view/100.ATSC/2DDE6089-23E5-4345-8E9E-7BCD5BDF45C8-1399555122246/fm6_0.pdf; Lorenzo Ardito, Roberto Cerchione, Erica Mazzola, and Elisabetta Raguseo, “Industry 4.0 Transition: A Systematic Literature Review Combining the Absorptive Capacity Theory and the Data–Information–Knowledge Hierarchy,” Journal of Knowledge Management 26, no. 9 (2021): 2222–54, https://doi.org/10.1108/jkm-04-2021-0325.

Authors

Dawn Bikowski, PhD, serves as an advisor to the provost and command group in the Office of Standardization and Academic Excellence for DLIFLC. Her interests include systems thinking, faculty development, and educational technology. She holds an MA in applied linguistics and a PhD in curriculum and instruction from Ohio University.

Erin O’Reilly, PhD, serves as the accreditation and quality assurance officer for DLIFLC. Her interests include change management, evaluation processes, and leadership decision making. She holds an MA in teaching English as a Second Language from Arizona State University and a PhD in education from Northcentral University.

Albert Stegall, MBA, serves as a data manager in the Directorate of Academic Affairs for DLIFLC. His interests include information and knowledge management, change management, and transforming data to inform decision making. He holds an MBA from Colorado Technical University, an MAR from Liberty University, and is currently pursuing a PhD in organizational leadership from Liberty University.