This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.
This article describes the starting points and basic principles of assessing the reliability of RDI activities. Assessing reliability is important in qualitative and quantitative research and development work.
Research and development work, whether scientific or development-oriented, is bound by shared conceptions of knowledge and by the principles of the various methods of producing it. Development work emphasizes practical problem-solving and evaluation of the applicability of the results. A reliability assessment focuses on the entire research or development process, its consistency and rationality. Consistency refers to the logical whole formed by the basic structure of the phenomenon under study, the research material, the approach, the methods of analysis, the presentation of results and the drawing of conclusions (Vuokila-Oikkonen 2001, 2003).
In theses, the process of collecting material for the development work, the documentation methods and the analysis methods are essential. In terms of the reliability and the results of the development work, it is important to identify, already at the planning stage, the target of the work, what information is relevant to it, how that information is collected, who participates in the collection, and how the different collected materials are processed and interpreted. Dissemination and application of the results of the development work also require accurate documentation of the work.
The validity of research and development work traditionally refers to the ability of a research method to measure what it is intended to measure.
Validity provides information on
1. How well the results correspond to reality, and how correct and generalizable they are.
2. How the operationalization of the concepts, i.e. their connection with the phenomena under study, has been carried out.
In principle, calculating or evaluating validity is easy: the measurement result is simply compared with verified information on the phenomenon being measured. This way of thinking is related to the traditional positivist view, in which research methods are used to seek truth only through empirical observations, experiments and measurements (Anttila 2006, 512).
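As a purely illustrative sketch of this comparison logic, one might correlate the results of a new indicator with values obtained from a verified reference measurement of the same phenomenon; the data and variable names below are hypothetical and not taken from this article.

```python
# Illustrative sketch: comparing a new indicator against verified
# reference values for the same phenomenon (hypothetical data).
from statistics import correlation  # Python 3.10+

# Scores produced by the new indicator for ten cases
new_indicator = [12, 15, 11, 18, 14, 16, 13, 17, 15, 12]
# Verified reference values for the same ten cases
reference = [13, 16, 10, 19, 14, 15, 12, 18, 16, 11]

# A high correlation suggests the indicator captures the same phenomenon
# as the verified reference, which supports its validity.
r = correlation(new_indicator, reference)
print(f"Correlation with the verified reference: {r:.2f}")
```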
Validity assessment is about how well suited the research approach and the methods used are for studying the phenomenon that is the subject of the research. In order to be valid, the applied research approach must respect the nature of the phenomenon under study and the research question. However, from the point of view of validity, the most important thing is not which indicators are used to get results (i.e. which indicators are valid), but what kind of research strategy is valid. The method used in the research work does not in itself lead to new information. Instead, the method must be chosen according to the type of information desired. This is the first thing a researcher must consider when selecting the research method.
In qualitative research, the reliability assessment focuses on the collection of research data, data analysis, and research reporting. Criteria for the reliability of qualitative research are truth value, applicability, consistency, and neutrality (Tynjälä 1991). In addition, different approaches and methods of qualitative research (e.g., the narrative approach, a method that utilizes narrative and storytelling) have their own reliability criteria that should be used.
The reliability of qualitative research concerns the collection of research data. Reliability is increased if the material is collected from the environment where the phenomenon actually occurs, and the material must be collected according to the principles of representativeness. The report should detail the stages of the study. If the material is collected through interviews or open-ended questionnaires, for example, the themes or questions used are recorded in the report. A researcher's or developer's interview diary improves reliability because it makes it possible to distinguish one's own feelings in the interview situation from the data. The interaction in the interview situation and the factors that influenced it should also be assessed, as well as the factors that may have influenced the responses received. The time spent on the interview and/or observation and its adequacy are also assessed.
The report should include enough direct quotations, i.e. qualitative material, for the reader to follow the analysis and assess what it was based on. One criterion of reliability is that the generated codes, i.e. the meanings identified and structured from the data, are mutually exclusive.
Central to assessing the reliability of an analysis is the researcher's and/or developer's ability to think abstractly. The results are assessed in relation to previous research, i.e. in relation to how diversely the phenomenon has been discussed. Reliable reporting also requires writing skills: it is important to use the key concepts generated from the results precisely and to report on the analysis in an easily understandable way. The criterion of consistency means that the researcher and/or developer has been able to form a meaningful and summarizing idea of the phenomenon under study.
More information about the reliability of qualitative research (coming)
The concept of reliability is usually used in connection with quantitative studies. The reliability of an indicator or a method refers to the ability of a research method and the indicators used to give non-random results and to produce consistent results (Anttila 2006, 515–517). Consistency means that a measurement gives the same result when repeated. The reproducibility of measurement results is good when the measurement gives the same result regardless of the situation or the measurer. Methodological accuracy is measured, for example, with Cronbach's alpha, which measures the internal consistency of the items that make up the various components of an indicator.
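As a hedged illustration of how Cronbach's alpha is commonly computed (the respondent data below are invented for the example), the coefficient relates the sum of the item variances to the variance of the total score: alpha = k / (k - 1) * (1 - sum of item variances / variance of total score), where k is the number of items.

```python
# Illustrative sketch of Cronbach's alpha (hypothetical respondent data).
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondents' scores per item."""
    k = len(item_scores)
    sum_item_variances = sum(pvariance(item) for item in item_scores)
    # Total score of each respondent across all items
    totals = [sum(scores) for scores in zip(*item_scores)]
    return k / (k - 1) * (1 - sum_item_variances / pvariance(totals))

# Three items of a hypothetical indicator, five respondents each
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
]
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```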
Reliability is improved by carefully planning the research and selecting the research method and conditions, and by controlling possible sources of error, for example by choosing a measurement time that is free from interference. Reliability is also improved by selecting the research subjects through random sampling, in which all instances of the phenomenon under study are numbered and a random number table is used to select the necessary number of instances for closer examination.
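As a minimal sketch of this procedure (the population size, sample size and seed are arbitrary examples), the numbering of instances and the random draw that a printed table would otherwise provide can also be done with Python's standard library:

```python
# Minimal sketch: simple random sampling from a numbered population
# (population size and sample size are arbitrary for illustration).
import random

population = list(range(1, 201))  # instances numbered 1..200
sample_size = 30

random.seed(2020)                 # fixed seed so the draw can be repeated
sample = random.sample(population, sample_size)
print(sorted(sample))
```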
Learn more about the reliability of the quantitative method (coming)
The aim of theses in universities of applied sciences is to produce, develop and renew working life practices. The starting point and framework for development work can be, for example, participatory action research, in which the key actors from the point of view of the development work are involved in the development. In assessing the reliability and usefulness of participatory action research, the assessment criteria are justified by the key commitments of the chosen approach. In action research, the varied opportunities for the participants to be involved in the different stages of information production are central. From the perspective of reliability, the extent to which the participants were involved during the process is assessed. Participants can be business sector operators, customers and private individuals. The appropriateness of the participatory methods and the chosen development methods is assessed, as are the changes resulting from the development work. Everyone involved in the development project participates in the change assessment. The assessment also focuses on the documentation of the development project and on whether its various phases have been sufficiently documented.
Both qualitative and quantitative methods can be used in development work, and the criteria for assessing reliability are determined by the methods chosen. The assessment focuses on the objectives, the suitability of the selected methods in relation to the objectives, and the use of time.
Susanna Hyväri and Päivi Vuokila-Oikkonen (2016, updated 2020)