Article Open Access July 23, 2023

Assessing Observing Skills of Biology Students in Selected Senior High Schools

1 Department of Science, Ghana Education Service, Takoradi, Ghana
2 Department of Science, Peki College of Education, Peki, Ghana
3 Department of Science Education, University of Cape Coast, Cape Coast, Ghana
Page(s): 135-152
Received: May 23, 2022
Revised: November 21, 2022
Accepted: February 14, 2023
Published: July 23, 2023

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright: Copyright © The Author(s), 2023. Published by Scientific Publications

Abstract

The purpose of the study was to design and develop performance-based tasks to assess the laboratory observing skills of biology students in senior high schools. The target population was all students in the nine schools within the Sekondi-Takoradi Metropolis reading biology as an elective subject. The accessible population was 753 SHS 2 biology students in six schools, from which a total of 261 students were randomly selected. These schools were of three types: single-sex male, single-sex female, and mixed. Means, standard deviations, frequencies, and percentages were calculated, and independent-samples t-tests were performed. No significant difference was found in the levels of proficiency shown by males and females in the various schools surveyed. It is recommended that students of both sexes and from all types of schools be given the opportunity to engage in more activities at the SHS level to sharpen their observing skills.

1. Introduction

Science as a discipline involves both “minds-on” and “hands-on” activities, and thus both the cognitive and the psychomotor domains. An up-and-coming scientist should therefore be developed in both domains in order to become a well-balanced scientist. Most nations have realized the importance of science as a tool for development and are therefore going scientific. In this vein, a researcher stressed that there is a need for the products of Ghanaian schools to be well equipped with the modern scientific outlook [1]. How can this be done if the teachers themselves are ignorant of the modern scientific outlook and so cannot assist students to develop the psychomotor and cognitive skills needed for the task ahead of them? Indeed, the scientific method employed by scientists worldwide requires them to carry out experiments in their bid to find solutions to existing or perceived problems. These experiments are generally referred to as laboratory-based practical work.

Ghana places importance on science practical work; that is why the Ministry of Education established the Science Resource Centre (SRC) project in 1995, to give all Senior High School (SHS) students a special opportunity to build capacity as they study science for efficiency in the field of science and technology. In 2012, science equipment and chemicals valued at $8.5 million were given to the Ghana Education Service (GES) by the then Minister of Education, Prof. Naana Jane Opoku-Agyemang, for distribution to 200 selected Senior High Schools (SHSs) across the country to enhance their Science Resource Centres (SRCs). This was followed up with four weeks of intensive training of teachers in investigative science teaching. It is worth noting that the elective biology syllabus (2010 edition), under the heading ‘time allocation’, recommends that the total of six teaching periods per week be divided as follows:

  • Theory - 3 periods per week (two 40-minute periods)
  • Practical - 3 periods per week (three continuous periods of 40 minutes each)

This means that 50% of the time allocated for teaching has been assigned to practical work.

Furthermore, the syllabus subscribes to the concept of profile dimensions and asserts that these should describe the underlying behaviours for teaching, learning and assessment. In the sciences, including biology, the three profile dimensions specified for teaching, learning and testing are: Knowledge and Comprehension, 30%; Application of Knowledge, 40%; and Practical and Experimental Skills, 30% [2]. The given percentage weight of each dimension should be reflected in teaching, learning and testing. The weights, expressed as percentages, show the relative emphasis that the teacher should give to each dimension in the teaching, learning and testing processes. The main emphasis of the syllabus is not only on getting students to acquire knowledge but also on enabling them to comprehend what they have learnt and apply it practically.

Emphasis continues to be put on practical work by modern school science curricula for various reasons, including the concretization of abstract concepts and the development of relevant process skills. The process skills in science, which are necessary for effective learning of science and technology as well as for individual and societal development, denote mental and physical abilities and competencies [3]. The American Association for the Advancement of Science (AAAS) put the science process skills into 15 categories [4]: observing, measuring, classifying, communicating, predicting, inferring, using numbers, using space/time relationships, questioning, controlling variables, hypothesizing, defining operationally, formulating models, designing experiments and interpreting data.

Biology is one of the elective science subjects offered at the SHS level in Ghana. The elective biology syllabus summarizes the process skills required for effective practical and experimental work as follows: equipment handling, planning and designing of experiments, observation, manipulation, classification, drawing, measuring and interpretation, recording and reporting. The integrated science syllabus also stresses that students must be involved in hands-on science activities that promote scientific thinking and reasoning.

To perform well in the West African Examinations Council’s (WAEC’s) West African Senior School Certificate Examination (WASSCE) Biology Paper 3, one needs to demonstrate adequate competence in observing and reasoning, which are among the process skills stipulated in the elective biology and integrated science syllabuses that learners must acquire.

During biology practical examinations, the usual trend of questions is: study a number of specimens carefully, identify them, give a number of observable features of the specimens, give a number of similarities and differences based on those observable features, and give a number of observable adaptations to their habitats. Sometimes it appears that students lack these process skills, as they often give answers based on what has been taught in theory instead of answers based on the use of their senses.

Students demonstrating the skill of observing must be able to use the senses effectively so as to obtain correct and detailed information. In this case, the students are expected to identify the colour, form, texture and structure of the specimens provided and be in a position to classify them.

It is an established fact that the assessment format employed has a great deal of influence on students’ mode of learning [5, 6, 7]. The outcome of testing students’ practical skills is noticeably different from what is tested through conventional paper-and-pen tests. Researchers have shown that an activity-based approach is key to science achievement [8, 9]. By design, performance assessment measures students’ activity-based learning and abilities better than paper-and-pencil tests. When pre-service teachers go on teaching practice, they are assessed using performance assessment to find out whether they exhibit the skills they have been taught [7, 9]. The same applies to nurses on clinical placement.

Researchers have defined performance-based assessment in two ways, as an assessment that: (a) requires students to do activities that involve the application of knowledge and skills gained from several learning targets and sources; and (b) makes use of clearly defined criteria to assess students’ achievement in practical application. Performance-based assessment therefore requires students to do something with their knowledge or skills, i.e. produce a product or report, or demonstrate a process [7, 10, 11, 12].

A considerable amount of work has been done on performance assessment in Ghana. One researcher assessed elective biology students in the Central Region of Ghana on the skills of interpreting, inferring and predicting, and found that students’ interpretation of biological data was low [13]. A similar study assessed the observing skills of JHS (Junior High School) students in the Ashiedu Keteke sub-metro of the Accra Metropolis; its finding was that students found it more difficult to detect differences than similarities when confronted with tasks [14]. Another study, on the proficiency levels of planning, performing and reasoning skills exhibited by JHS 2 pupils of the Cape Coast Metropolis, revealed that students exhibited low levels of proficiency in reasoning tasks [15]. Another area that deserves attention is gender and academic achievement, since issues of gender have long been of interest and concern to researchers and education experts. Reports on this topic are mixed. Some studies find that achievement in science is not distinctly influenced by gender [16, 17]. Other researchers have indicated that males tend to dominate science and technology, relegating the few females left in scientific and technological fields to the background [18, 19]. Still others point to new findings indicating that males are becoming less interested in school science, as evidenced by fewer boys studying science-related subjects [20, 21].

The discussion of the influence of gender on science achievement has still not been exhausted; it remains a controversy in the scientific community, and more studies into the influence of gender on students’ science achievement should be carried out.

School type (single-sex versus co-educational) is yet another area of interest and concern to the scientific community. One study found that girls in single-sex schools recorded the highest average scores in all the selected practical skills, except measurement skills, in laboratory group work [22]. The assertion is that girls in single-sex groups in most cases acquire and develop better practical skills than their mates in mixed-sex schools. In short, good performance in practical skills depends not only on sex but also on the type of school one attends, whether single-sex or mixed.

The biology syllabus is designed to help students develop the science experimental skills needed for easy manipulation and handling of science equipment, biological material and living things. A number of Chief Examiner’s Reports for Biology have consistently indicated the following weaknesses among most biology students:

  • Candidates did not base their answers on the specimens provided only, which is due to a lack of observing skills [23].
  • Power of observation was very poor [24].

One cannot understand why students still have problems with observation. This has necessitated the need to assess the observing skills of SHS biology students to determine whether they have the skill of observing needed to pass the examination.

Most science educators assume that students at the senior high school (SHS) level do not exhibit a satisfactory level of competence in the skill of observing when confronted with practical issues in the laboratory [25, 26]. This can be attested to by the Chief Examiner’s Report [27]. From the reports, it is clear that the poor WASSCE results in biology can be attributed to a lack of competence in the skill of observing. The study was guided by these research questions: (1) Are biology students presented with monocotyledonous and dicotyledonous leaves able to observe the major differences and similarities between the two leaf types? (2) Is there any statistically significant difference between male biology students in boys’ schools and female biology students in girls’ schools in demonstrating the skill of observing? (3) Is there any statistically significant difference in demonstrating the skill of observing between male biology students in single-sex schools and those in mixed schools?
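Research questions (2) and (3) call for independent-samples t-tests comparing group means. As a minimal sketch of that comparison (the scores below are invented for illustration and are not the study's data), a pooled-variance t statistic can be computed as:

```python
import math
from statistics import mean, variance

def independent_t(sample_a, sample_b):
    """Pooled-variance independent-samples t statistic and its degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    # Pooling assumes both populations share a common variance
    pooled = ((na - 1) * variance(sample_a) + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical observing-skill scores (out of 14) for two groups of students
boys = [9, 10, 8, 11, 9, 7, 10]
girls = [10, 9, 8, 9, 11, 8, 10]
t_stat, df = independent_t(boys, girls)  # a t near zero suggests no group difference
```

The resulting t value would then be compared with the critical value for the computed degrees of freedom at the chosen significance level (commonly 0.05 in studies of this kind).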

1.1. The Concept of Assessment

For the nation to achieve its goal of producing scientifically literate citizens, the way students are assessed should be carefully considered. The value of a school graduate is a product of effective teaching and good assessment [6, 26, 27]. The outcome of a practical skills test is clearly different from that of a test conducted with conventional paper and pen. For some time now, there has been much dissatisfaction with the products of the various educational levels. Secondary school teachers wonder whether the JHS graduates who come to secondary schools merited the grades they have. University lecturers are not sure what students who come to the universities were ever taught. Employers cannot understand why first-class students cannot translate what they were taught in school onto the field. The problem lies in the fact that, most often, the grades are earned merely by memorization of facts [28]. Many educators are now weighing the effectiveness of alternative assessment methods and considering their use to evaluate the learning outcomes and achievements of students [29, 30, 31, 32, 33]. This has necessitated the need to assess the laboratory skills of SHS elective biology students to determine whether or not they have the skills of observing and reasoning.

The concept of assessment lends itself to different views and theories. The terms and definitions of assessment offered by scholars, authors and curriculum experts are plethoric [6, 32, 33, 34].

Assessment is defined as the process of gathering information for the purpose of making decisions about students, curricula, programmes and educational policy [6, 10]. Gathering information with the intention of making a decision on students’ achievement is an assessment of students’ competence. Decisions about students include choosing them for academic opportunities, grading them and certifying their academic performance. Assessment is also the process of identifying whatever lapses or gaps are left between what was taught and what was learned by students [5]. When the teacher has identified the problems and difficulties encountered by the student in the instructional process, this provides the platform for the teacher to gather information to address those difficulties. Assessment may help the teacher to collect data on students’ learning behaviour and achievement levels. Assessment also includes any approach to determining the amount of recent knowledge that a student has acquired, or any teaching and learning activities performed to obtain information that can diagnose and change teaching and learning [5, 35, 36, 37]. Assessment is therefore the way the teacher finds out whether the students are learning and meeting the instructional expectations.

Assessment could be carried out prior to instruction to ascertain how much knowledge students bring to a new learning situation and how ready they are for new learning; the approach to obtaining such information is called a diagnostic test or a test of entry behaviours [32, 36, 37, 38]. Assessment during instruction helps students to learn more: formative assessment checks the level of students’ understanding. A summative assessment is a systematic approach through which a teacher gathers enough information, in a planned and logical way, about what students have learned over an instructional period, in order to draw inferences about students’ achievement and to provide a grade that reflects each student’s learning [7, 36, 37, 38, 39]. Essentially, assessment provides comprehensive information on the students. The knowledge and information worth learning by the students should be carried by the goals and objectives [6, 28].

Learners, teachers and administrators all find assessment instrumental and essential [6, 26]. From the perspective of learners, it allows efficient learning by directing students’ attention to what is deemed very important. Assessment enhances retention of knowledge and possible transfer of learning, motivates learning by presenting a clear picture of a student’s learning progress, and shows proof of work that opens up opportunities for jobs and for scholarships for further studies. It can also aid student self-assessment by providing students with information that forms a more objective basis for assessing their own strengths and weaknesses [11, 40].

For the teacher, assessment can be used to ascertain how well students possess the skills and abilities needed to begin instruction; it enables the learning tasks on which students are making satisfactory progress, and those on which they are not, to be identified. Moreover, assessment allows students who are facing learning difficulties to be identified so that the condition can be remedied, and allows students who have acquired the skills needed to handle the learning tasks, to the extent that they can move on to the next stage of instruction, to be identified [11, 30–40]. It also enables the grade to be assigned to each student to be determined, and allows students who should be awarded certificates to be known.

To administrators, assessment can help in evaluating instructional effectiveness: determining how much of the instructional objectives was achieved, whether the instructional methods and materials used were appropriate and suitable, and how well sequenced the process and the learning experiences were. Good decisions about teaching need to be made, and high-quality information is necessary if decisions are to be accurate, valid and fair to students. It is only through high-quality assessments that high-quality information is obtained [6–40].

1.2. The Concept of Performance Assessment

Performance-based assessment is broadly seen as a kind of test that asks students to carry out an activity or a task instead of asking them to tick the correct answer from options provided [7, 41]. Performance-based assessment is designed around practical tasks that get students busy with “hands-on” and “minds-on” work, leading them to develop their knowledge of science [7, 42]. Such assessments can also be described as tests that require students to use acquired skills to directly produce a project or do some practical activity [6, 43]. Performance-oriented assessments are designed to test higher-order tasks which have a direct bearing on students’ learning process in the classroom and on what is practised in the scientific world; these include observing, hypothesizing, recording, inferring and generalizing [36]. In science classrooms, performance assessment evaluates students on how well they can use their skills to carry out specific tasks and on the kind of product that is turned out at the end of the learning process.

Performance assessment introduces students to a kind of testing which is more realistic and pragmatic than the popular pen-and-paper essay test; by asking them to do hands-on activities relating to real life, it attempts to draw out performance which is a valid indicator of students’ understanding of concepts and of their possible application outside the school environment [11, 44].

Performance-based assessment has captured the attention of educators and policymakers for a number of reasons. It reflects recent trends towards activity-based teaching and learning, as well as enquiry methods of learning, in many countries; and it is purposefully designed as a type of assessment that is educationally and psychologically grounded in “constructivist” pedagogies [28, 31–45]. It also clarifies the meaning of complex learning targets, and it requires the integration of knowledge, skills and abilities.

Other researchers have claimed that the targets of science education reforms come along with a great deal of change in performance assessment [46]. This is because the standards-based reform approach to science gets students actively engaged instead of merely responding to reading comprehension in science. Many educationists hold the view that performance-based assessment is not simply a method of testing students’ learning outcomes, but a teaching and learning means through which students present their approach to, and explanation of, information in finding solutions to everyday problems [32–47]. Performance assessment encourages practices that focus on student-centred activities rather than teacher-dominated techniques [7, 27–48].

In Swaziland, it was found that students encounter practical-based assessments only at the senior level, which was not helping them to develop cognitively. An exploratory study was therefore conducted to expose junior science students to performance assessments [49]. The tasks were designed to direct students’ attention to demonstrating knowledge and procedural skills through planning, investigating and recording, analysing and interpreting data, and applying the data in a given situation.

Science provides a good learning environment which makes it possible for students to engage in inquiry and critical thinking; thus, it naturally supports performance-based assessment [41]. Science performance assessment has been conceived as a blend of: (i) a purposeful task that requires the use of concrete objects for a solution and is responsive to students’ actions; (ii) an organized presentation of a student’s response; and (iii) a system of scoring the correct answer as well as evaluating the appropriateness of the approach to handling the task [50]. Performance assessment cannot be properly defined without all three. Because performance assessments demand that students demonstrate understanding and capability in a task they have been actively engaged with, they can be difficult and challenging to design and execute [7, 41]. However, if well planned, performance assessments can show clearly what students’ levels of knowledge, performance and capability are, simply by asking students to carry out tasks such as generating scientific hypotheses or conducting experiments. Performance assessment tasks have to be properly designed so as to help students achieve the intended learning outcomes. Constructing good performance assessment tasks requires a lot of time: one should consider the age, class, topic, time allotted, instruments, appropriateness of wording, and the purpose of constructing the tasks. Several trial runs with students, to obtain their inputs, are necessary before the tasks can be used for the actual assessment [7, 31–51]. These authors again stated that if the intention is to improve teaching and learning, then well-planned performance-based assessment tasks are very important.
Studies bordering on performance assessment have stated that, if examples are readily accessible to indicate students’ levels of achievement and capability, and the criteria are unambiguous, then performance-based assessments can be uniform and consistent regardless of the number of different assessors handling the assessment [28, 32–52].

One clear challenge associated with the use of performance assessment is the mode of scoring students’ responses. In performance assessment, there is the need for a guide so that the actual skills to be assessed remain relevant. The assessor should state his or her instructional goals and learning outcomes clearly, plan the performance assessment tasks, design the scoring format, conduct the performance-based assessment and scoring, and give interpretations based on the test outcome. The effectiveness or value of these approaches depends on the appropriate use of suitable, well-designed assessment tasks and scoring formats. A set of explicit standards by which a work will be judged is referred to as a rubric [27–53]. When the task is given to learners, it takes experienced assessors, such as teachers and trained staff, to examine the correctness and appropriateness of students’ work, bearing in mind the acceptable standard for judgement. Whenever a task is given to a student, he or she is expected to show the degree to which he or she can exhibit the skills demanded by the task at hand.

When a task is given to a student, the student is required to read the instructions carefully for a better understanding of what he or she is supposed to do. Questioning and answering promote participatory learning, good communication skills and confidence building in the learning process. A good way of engaging students is by asking thought-provoking questions, engaging them in a worthwhile activity, and prompting them to answer or ask questions [11].

A number of teachers are not comfortable with the use of performance-based assessment in the classroom. Primarily, teachers who do not engage their students in performance assessment think they do not have enough of what it takes to conduct and assess students’ performance [7, 28–54]. Previous unsuccessful attempts, with their accompanying challenges, are yet another reason some teachers will not carry out performance-based assessment [7, 55]. When learners are asked to produce their learning outcomes in the form of results, there is the possibility of learners showing little or no evidence that they have followed the expected process. Again, teachers’ insufficient knowledge of how to use performance assessments to assess students fairly, together with unsuccessful or inconclusive executions of performance assessment, is thought to be responsible for their poor acceptance by teachers [7, 33–56]. Performance assessment in general fits into the constructivist paradigm.

2. Materials and methods

2.1. Research Design

The research design was a survey. Surveys gather data relating to the current status of a phenomenon [57]. In this study, the phenomenon in question was that biology students in SHSs in the Sekondi-Takoradi Metropolis who have been exposed to practical work exhibit minimum competency in observing. The survey design was chosen because it allows researchers access to a broad scope of behaviours which can be observed freely in their natural environment [58]. Another advantage of the survey design is that it allows one to choose a large sample in a relatively quick and inexpensive way [59].

2.2. Sample and Sampling Procedure

The respondents came from a sample of six selected senior high schools (SHSs) which offer elective biology to students within the Sekondi-Takoradi Metropolis of the Western Region of Ghana. This made the data gathering practical in terms of cost and time [60]. The sampling of schools was done purposively because all four single-sex schools chosen offer biology to their students, while among the mixed schools only the two chosen do. Purposive sampling gives researchers room to deliberately choose individuals and places for a study [61]; it supports the deliberate inclusion of cases in the sample after careful assessment [62]. Sekondi-Takoradi was selected for the research due to its proximity and convenience to the researchers. Of the six selected SHSs, two were boys’ schools, labelled schools A and B; two were girls’ schools, labelled schools C and D; and two were co-educational, labelled schools E and F. The research questions suggested the types of schools to choose for the study. All six schools had between two and six biology classes, with a maximum class size of 50: school A had six elective biology classes, school B had three, school C had three, school D had two, school E had two and school F had two. Computer-generated (MS Excel) random numbers were used to sample the classes in the selected schools. Across the six schools involved in the main study, the accessible population was approximately 753 students (i.e. those offering elective biology in SHS 2 in the selected schools). The research involved 261 respondents, representing 34.7% of the accessible population, which is far above the minimum of 10% of the population that should be involved in a research study of this kind [63].
Again, the motive behind the use of a large sample size was to allow us to state, with a certain level of confidence, that findings based on the sample would also hold in the population [64].
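The class-level random selection described above, done in the study with MS Excel random numbers, can be sketched in code as follows. The class labels are hypothetical; only the counts of classes per school follow the text:

```python
import random

# Hypothetical class labels; the number of classes per school follows the text
classes_per_school = {
    "A": ["A1", "A2", "A3", "A4", "A5", "A6"],
    "B": ["B1", "B2", "B3"],
    "C": ["C1", "C2", "C3"],
    "D": ["D1", "D2"],
    "E": ["E1", "E2"],
    "F": ["F1", "F2"],
}

rng = random.Random(2022)  # fixed seed makes the draw reproducible
# Draw one intact class per school, mirroring the random-number selection
selected = {school: rng.choice(classes) for school, classes in classes_per_school.items()}
```

Selecting whole classes in this way amounts to cluster sampling: every student in a chosen class becomes a respondent.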

The respondents were males and females from three different types of schools, namely boys-only, girls-only and mixed (both males and females) schools. The sex distribution of the total sample was 140 males and 121 females. The totals by school type were: boys’ schools, 92; girls’ schools, 80; mixed schools, 89. More boys than girls took part in this study because the total number of girls in the two mixed schools was fewer than the total number of boys in those schools, and the total number of girls in the two girls’ schools was fewer than the number of boys in the two boys’ schools put together. In addition, in one of the girls’ schools the class randomly selected for the study was a mixture of elective science students and agricultural science students, making the number of biology students smaller. These reasons contributed to the number of boys being greater than that of the girls.

The breakdown of the types of schools involved in the study is given above.

2.3. Validity of Instruments

It is very important to consider validity in any attempt to develop and evaluate measuring instruments [59]. Validity determines whether an instrument measures what it intends to measure. Four types of validity are commonly distinguished: content validity, concurrent validity, predictive validity and construct validity. Content validity has to do with the sampling of a specified universe of content. Concurrent validity has to do with the relation of test scores to an accepted standard of performance on the variable which the test is meant to measure. Predictive validity concerns the connection of test scores to a criterion based on performance at some later date, while construct validity is the extent to which inferences about certain constructs in a psychological theory can be made from the test scores [10]. Validity may be determined by curricular approaches or by statistical analysis. Therefore, to ensure the validity of the performance assessment tasks, experts’ opinions were sought and references were made to textbooks to ascertain the face, content and construct validity of the instruments. Consensus among the opinions of the team of supervisors and of textbook writers, concerning important objectives, skills and content in performance assessment, adequately defined the validity of the instruments [65].

2.4. Reliability of Instruments

In performance assessment it is important to consider the reliability of the instrument itself, inter-rater reliability, where to set cut-off scores, and how to deal with scores [41]. Reliability is a matter of degree, as no procedure is absolutely reliable, and it is usually represented in terms of correlation coefficients. Prior to the main research, the instruments were pilot-tested in three selected senior high schools within the Cape Coast Metropolis to check their reliability. The reliabilities of the instruments were improved through the use of diagrams to clarify the tasks, and clear directions on task administration were given to students to remove ambiguities. The researchers personally administered the tasks to ensure that each student was given fair and adequate time and resources to complete them, which further improved reliability. Scoring schemes were used to score the respondents' answers to ensure uniformity in scoring, which also improved the reliabilities of the instruments.

3. Results and Discussion

3.1. Biology Students' Observation of Major Differences and Similarities between Monocotyledonous and Dicotyledonous Leaves

This section presents results and discussion on research question one: Are biology students presented with monocotyledonous and dicotyledonous leaves able to observe the major differences and similarities between the two leaf types? To determine this, the respondents performed a task, and the skills they exhibited were scored using the scoring rubrics. One point was credited for each correct response, showing the student's proficiency, and zero for a wrong response, showing that the respondent was not proficient in that particular skill. Respondents were judged proficient according to the number of skills exhibited in the tasks. In all, the total score for the observing skill was 14. A respondent who scored eight or above in the skill of observing was regarded as proficient, while a respondent who scored between zero and seven was not. The mean mark plus one was used as the cut-off because, according to the literature reviewed, almost all researchers who have carried out performance assessment research used this criterion [15, 18]. With a mean of 9.16 (SD = 2.27) out of a maximum of 14, 77.4% of the respondents involved in the study exhibited competency in the observing task. The results of the students' performance in the observing task are shown in Table 5.
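The cut-off procedure described above can be sketched in code. This is a minimal illustration only: the function names and the sample scores are invented for the example, and only the cut-off of 8 out of a maximum of 14 comes from the study.

```python
# Sketch of the proficiency classification described in the study.
# The cut-off (8 of 14) is from the text; the sample scores are illustrative.
from statistics import mean, pstdev

CUTOFF = 8      # score >= 8 of 14 => proficient
MAX_SCORE = 14

def classify(score: int) -> str:
    """Return the proficiency label for a single observing-task score."""
    if not 0 <= score <= MAX_SCORE:
        raise ValueError(f"score must be between 0 and {MAX_SCORE}")
    return "proficient" if score >= CUTOFF else "not proficient"

def summarise(scores: list[int]) -> dict:
    """Mean, SD, and percentage proficient for a set of task scores."""
    n_proficient = sum(1 for s in scores if s >= CUTOFF)
    return {
        "mean": round(mean(scores), 2),
        "sd": round(pstdev(scores), 2),
        "pct_proficient": round(100 * n_proficient / len(scores), 1),
    }

# Illustrative scores only, not the study's raw data.
sample = [3, 7, 8, 9, 9, 10, 11, 12, 14]
print(classify(7))      # not proficient (below the cut-off of 8)
print(summarise(sample))
```

The same pattern extends to any dichotomous rubric: fix the cut-off first, then report both the frequency distribution and the percentage above the cut-off, as Tables 5 and 6 do.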

From Table 5, out of the 261 respondents, 4 (1.5%) scored 3 points out of the total of 14, 3 (1.1%) scored 4 points, 10 (3.8%) scored 5 points, 15 (5.7%) scored 6 points, and 27 (10.3%) scored 7 points. In all, 59 respondents, representing 22.6%, scored fewer than 8 of the 14 marks.

Two hundred and two respondents, representing 77.4%, scored more than seven of the 14 marks. Thus, the majority of the respondents were proficient in the observing skill, while the minority were not. Table 6 shows the summary of the students’ performance on the observing task.

The performance of the respondents was, in general, commendable: 202 respondents, representing 77.4%, were proficient in Task I (the observing skill). Good observation begins with careful and systematic planning: students have to plan what to look for, when to look for it, and where to make the observation. For students to observe properly, their tasks must also be well defined, so that the purposes are not confusing, and the tasks should be based on familiar things in the school, home, and immediate community of the students. The competency shown by the majority is therefore probably due to the familiar objects used, i.e. leaves. Another probable reason is that the scoring rubrics contained some 40 acceptable responses, out of which the respondents were expected to give only fourteen.

The mean score in the observing task was 9.16 (SD = 2.27) out of a maximum of 14. Taking the mean score at face value, one may be tempted to conclude that the students exhibited proficiency in the skill of observing, but this is not entirely accurate. The positive results do not mean the students showed no naivety in the tasks; they did. There was a major setback in Task I: only three respondents (1.1% of the 261) gave all fourteen expected responses in the observing task, thereby scoring the full fourteen marks. This small number might be due to inadequate practical lessons employing observing skills at the senior high school level.

3.1.1. Students’ Responses in Observing Task

Under the observing task (Task I), students were required to observe differences between leaves A and B and between leaves B and C, and similarities between leaves A and B and between leaves B and C. Six student answer sheets were selected: three exhibiting proficiency and three not exhibiting proficiency in the observing task. The selection was based on school type, i.e. two from boys' schools, two from girls' schools, and two from mixed schools; of the two answer sheets from each type of school, one exhibited proficiency while the other did not. Of the three students who exhibited proficiency, two gave all 14 expected responses on the task and the third gave 13 of the 14. The three students who did not exhibit proficiency each gave three of the 14 expected responses.

3.1.2. Pattern of Responses on the Observing Task

Task I had four major components: differences between leaves A and B, similarities between leaves A and B, differences between leaves B and C, and similarities between leaves B and C. The first component, differences between leaves A and B, had about 12 acceptable responses, of which the respondents were expected to give any 5. The second component, similarities between leaves A and B, had about 7 acceptable responses, of which respondents were expected to give any 2. The third component, differences between leaves B and C, had about 8 acceptable responses, of which respondents were expected to give 5, and the fourth component, similarities between leaves B and C, had about 8 acceptable responses, of which respondents were expected to give any 2. In all, the scoring rubrics had about 35 acceptable responses, of which the respondents were expected to give 14. The distribution of credit for Task I is found in Table 7. The "component assessed" is the individual item assessed on the student answer sheet under the four major components. The details of these items and the scoring format are found in Appendices A and B respectively.

From Tables 7 and 9, five items each were assessed, but there were 12 expected responses in Table 7 and 11 in Table 9. Some of the responses were very popular and well answered, others were popular but not well answered, and still others were very unpopular. The popular and well-answered items included the edge of the leaves and the length of the stalk. Of the 175 respondents who commented on the edge of the leaves, 141 (80.6%) had it right while 34 (19.4%) had it wrong; of the 100 respondents who commented on the length of the stalk, 94 (94%) had it right and 6 (6%) had it wrong. The popular but not so well-answered items included the venation of the leaves and the nature of the leaf blades. Of the 164 respondents who touched on the type of venation, 125 (76.1%) had it right and 39 (23.9%) had it wrong; of the 148 respondents who commented on the nature of the blade surface, 115 (77.7%) had it right and 33 (22.3%) had it wrong. The ligule, i.e. the leaf extension between the leaf blade and the stalk, and the colour of the leaf blade were unpopular: only 1 respondent mentioned the ligule, and of the 5 respondents who mentioned the colour of the leaf, 2 (40%) had it right and 3 (60%) had it wrong. Even though leaves A and B were both dicotyledonous, and thus both had reticulate (net) venation, many respondents differentiated their venation.

The similarities between leaves A and B, and between leaves B and C, did not pose any problem, since respondents simply used the labelled parts to bring out the similarities. For instance, instead of stating that both leaves A and B had pointed apices, or that both leaves B and C had smooth margins, respondents would just write that both leaves A and B had an apex and that both leaves B and C had margins. A summary of students’ responses in the observing task regarding the differences between leaves A and B is shown in Table 7.

A summary of students’ responses in the observing task regarding the similarities between leaves A and B is shown in Table 8.

A summary of students’ responses in the observing task regarding the differences between leaves B and C is shown in Table 9.

A summary of students’ responses in the observing task regarding the similarities between leaves B and C is shown in Table 10.

3.2. Difference between Male and Female Biology Students in Demonstrating the Skill of Observing

This sub-section presents results and discussion on research question two: Is there any statistically significant difference between male biology students in boys’ schools and female biology students in girls’ schools in demonstrating the skill of observing? To answer this, an independent-samples t-test was employed to ascertain whether there was any significant difference between the levels of proficiency in observing exhibited by male and female respondents from single-sex schools. The results of the independent-samples t-test analysis of the students’ proficiency in the observing skill are presented in Table 11.

Table 11 indicates that there was no significant difference in scores between male biology students in boys’ schools (M = 8.75, SD = 2.13) and female biology students in girls’ schools (M = 9.41, SD = 2.61), t(170) = 1.80, p = 0.073. The female respondents scored higher on average than the male respondents, but the difference was not statistically significant; thus, male and female respondents showed similar levels of competency in the skill of observing. The outcome suggests that although males observe well, females can observe more critically. One reason for the relatively lower performance of the male respondents was that most of them observed trivial features such as spots and insects on the leaves. There were also fewer female respondents than males. In addition, some of the male respondents wrote about features that could not be observed externally but only internally, implying that they answered the practical question from the theory they had learnt [23]. This situation can be attributed to inadequate practical work done in most of our schools.
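As a rough consistency check, the reported t-statistic can be recomputed from the summary statistics in the text (the two group means and SDs, together with the group sizes of 92 and 80 given in the sample description). This is an illustrative sketch using SciPy, not the authors' analysis script, and small discrepancies from the printed values are expected because the inputs are rounded.

```python
# Recompute the reported independent-samples t-test from summary statistics.
# Inputs taken from the text: boys' schools M = 8.75, SD = 2.13, n = 92;
# girls' schools M = 9.41, SD = 2.61, n = 80. Rounded inputs mean the
# result will only approximate the printed t(170) = 1.80, p = 0.073.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=8.75, std1=2.13, nobs1=92,
                            mean2=9.41, std2=2.61, nobs2=80,
                            equal_var=True)  # pooled-variance (classic) t-test

df = 92 + 80 - 2  # 170, matching the reported degrees of freedom
print(f"t({df}) = {abs(t):.2f}, p = {p:.3f}")
```

The recomputed statistic lands close to the reported values, with p above the conventional 0.05 threshold, which is consistent with the paper's conclusion of no significant difference.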

This finding is consistent with a similar study in which females performed slightly better than their male counterparts, although the difference was not statistically significant [14]. It also supports an earlier report of gradual improvement in the performance of girls in recent years [66]. There have always been calls to encourage females to do better in the sciences, but in doing so we should be careful not to leave the males behind. The finding is, however, inconsistent with previous research in which males had a higher mean score than females, though again not statistically significant [18].

3.3. Difference in Observing Skills between Female Elective Biology Students in Single-Sex Schools and Those in Mixed Schools

This sub-section presents results and discussion on research question three: Is there any statistically significant difference between female elective biology students in single-sex schools and those in mixed schools in the skill of observing?

To answer this, an independent-samples t-test was used to ascertain whether there was any significant difference between the levels of proficiency in observing exhibited by female elective biology students in single-sex schools and those in mixed schools. The results are presented in Table 12.

Female elective biology students in single-sex schools obtained a higher mean score in the skill of observing (M = 9.41, SD = 2.61) than those in mixed schools (M = 9.33, SD = 1.98), t(119) = 0.10, p = 0.851. As Table 12 shows, however, the difference was not statistically significant; thus the two groups exhibited the same level of proficiency. The slightly higher mean nevertheless suggests that girls in single-sex schools marginally outperformed girls in mixed schools in laboratory observing skills. This agrees with research finding that girls at single-sex schools did better than girls in co-educational schools [67]. The slightly higher mean score could be attributed to the fact that girls’ schools tend to be highly selective in intake, to draw students from higher socio-economic backgrounds, and to have well-established traditions [68]. Girls in single-sex schools are also more likely to associate with academically oriented peers, to do more homework, and to be less stereotyped in their sex-role attitudes than students in co-educational schools [69, 70].

4. Conclusions and Recommendations

It can be concluded from the study that biology students in SHSs in the Sekondi-Takoradi Metropolis of Ghana demonstrate observing skills, and that female SHS students in single-sex schools obtained a higher mean score than their male counterparts, although the difference was not statistically significant. Even though this research was conducted with SHS 2 biology students within the Sekondi-Takoradi Metropolis of the Western Region of Ghana, and given the difficulty of generalizing results and findings from performance assessments, the following recommendations are made:

  • Students must be provided with the opportunities to do more activities at the SHS level to sharpen their observing skills.
  • Biology teachers teaching in all types of schools should consider using performance assessment as a tool to improve and sharpen students' observing skills.

Author Contributions: Conceptualization, EA, EAB and AFD; methodology EA, EAB and AFD; validation; formal analysis EA, EAB and AFD; investigation EA, EAB and AFD; resources; data curation EA, EAB and AFD; writing—original draft preparation EA, EAB and AFD; writing—review and editing EA, EAB and AFD; visualization, EA, EAB and AFD; supervision EA, EAB and AFD; project administration EA, EAB and AFD; All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Data Availability Statement: Data is available on request from the corresponding author.

Acknowledgments: We acknowledge the participants in this study.

Conflicts of Interest: The authors declare no conflict of interest. No funders had any role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Djangmah, J.S. (1986). Innovative Programmes and Practice in Teacher Education and Training; Report of the National Conference on Teacher Education and its Agencies in Ghana. Cape Coast: Institute of Education.
  2. CRDD. (2010). The teaching syllabus for biology. Accra: Ghana Publishing Corporation.
  3. Huppert, J., Lomask, S. M., & Lazarowitz, R. (2002). Computer simulations in the high school: students’ cognitive stages, science process skills and academic achievement in microbiology. International Journal of Science Education, 24(8), 803–821.[CrossRef]
  4. Bybee, R.W., Buchwald, C. E., Crissman, S., Heil, D., Kuerbis, P. J., Matsumoto C., & Mclnerney, J.D. (1989). Science and technology for the elementary years: Frame works for curriculum and instruction. Washington, D.C.: National Centre for Improving Science Education.
  5. McMillian, J. H. (2001). Classroom assessment: Principles and practice for effective instruction. Boston: Allyn and Bacon.
  6. Osman, S., Bordoh, A., & Eshun, I. (2021). Basic School Teachers’ Conceptions of Assessment in the Sissla East Municipality. International Journal of Research and Innovation in Social Science, 5(3), 311-324.[CrossRef]
  7. Eshun, I., Bordoh, A., Bassaw, T. K., & Mensah, M. F. (2014). Evaluation of social studies students’ learning using formative assessment in selected Colleges of Education in Ghana. British Journal of Education, 2(1), 39-48.
  8. Cusick, M. E., Klitgord, N., Vidal, M., & Hill, D. E. (2005). Interactome: gateway into systems biology. Human Molecular Genetics, 14(suppl_2), R171-R181.[CrossRef] [PubMed]
  9. Bordoh, A., Eshun, I., Quarshie, A. M., Bassaw, T. K., & Kwarteng, P. (2015). Social Studies Teachers’ Knowledge Base in Authentic Assessment in Selected Senior High Schools in the Central Region of Ghana. Journal of Social Sciences and Humanities, 1(3), 249-257.
  10. Nitko, E. (2004). Educational assessment for students. (4th ed.). Columbus: Merrill.
  11. Bekoe, S. O., Eshun, I., & Bordoh, A. (2013). Formative assessment techniques tutors use to assess teacher-trainees’ learning in Social Studies in Colleges of Education in Ghana. Research on Humanities and Social Sciences, 3(4), 20-30.
  12. Kankam, B., Bordoh, A., Eshun, I., Bassaw, T. K., & Korang, F. Y. (2014). Teachers’ perception of authentic assessment techniques practice in Social Studies lessons in Senior High Schools in Ghana. International Journal of Educational Research and Information Science, 1(4), 62-68.
  13. Anthony-Krueger, C. (2001). Assessing some process skills of SSS students in elective biology. Unpublished Master’s Thesis, University of Cape Coast. Cape Coast.
  14. Tachie Y. T. (2001). Assessing observational skills of JSS students in the Ashiedu Keteke sub-metropolis. Unpublished Master’s Thesis, University of Cape Coast. Cape Coast.
  15. Dadzie, A. (2011). Assessing laboratory skills of selected JHS students in selected schools of Cape Coast Metropolis. Unpublished Master’s Thesis, University of Cape Coast. Cape Coast.
  16. Afuwape, M. O., & Oludipe, D. (2008). Gender differences in integrated science achievement among pre service teachers in Nigeria. Educational Research and Review, 3 (7), 242-245.
  17. Babajide, V. F. T. (2010). Generative and Predict-Observe-Explain Instructional Strategies as Determinants of Senior Secondary School Students’ Achievement and Practical Skills in Physics. Unpublished doctoral Thesis, University of Ibadan, Nigeria.
  18. Agyei, C. A. (2012). Assessing laboratory skills of biology students in selected senior high schools. Unpublished Master’s Thesis, University of Cape Coast. Cape Coast, Ghana.
  19. Yoloye, T. W. (2004). Increasing female participation in science: Mimeograph, University of Ibadan: Nigeria.
  20. Omoniyi, O. A. (2006). The effect of the constructivist-based teaching strategy on gender-related differences on students’ misconceptions in chemistry. Akure, Nigeria: Ministry of Education.
  21. Weaver-Hightower, M. (2003). The “boy turn” in research on gender and education. Review of Educational Research, 73 (4), 471-498.[CrossRef]
  22. Njoku, Z. C. (2002). Enhancing girls acquisition of science process skills in co-educational schools. An experience with sex grouping for practical chemistry. Journal of the Science Teacher Association of Nigeria, 37 (1 & 2).
  23. WAEC (2012). Chief examiners' reports on senior secondary school certificate examinations. Accra: Wisdom Press.
  24. WAEC (2018). Chief examiners' reports on senior secondary school certificate examinations. Accra: Wisdom Press.
  25. Cortes, K. L., Reid, J. W., Fallin, R., Hao, J., Shah, L., Ray, H. E., & Rushton, G. T. (2022). A Longitudinal Study Identifying the Characteristics and Content Knowledge of Those Seeking Certification to Teach Secondary Biology in the United States. CBE—Life Sciences Education, 21(4), ar63.[CrossRef] [PubMed]
  26. WAEC (2020). Chief examiners' reports on senior secondary school certificate examinations. Accra: Wisdom Press.
  27. Bordoh, A., Nyantakyi, F., Otoo, K. A., Boakyewaa, A., Owusu-Ansah, P., & Eshun, I. (2021). Effective teaching of Social Studies concepts in Basic Schools Ghana. Universal Journal of Social Sciences and Humanities, 1(1), 46-53.[CrossRef]
  28. Bordoh, A. (2022). Teacher Trainees Use of Feedback in Assessing Student Learning in Social Studies Lessons in Basic Schools in Ghana: A Study of Selected Colleges of Education in Western and Northern Regions. Social Educator, 7(1), 27-43.
  29. Shepard, L. A. (2000). The Role of assessment in a learning culture. Education Researcher, 29, 23-28.[CrossRef]
  30. Bordoh, A., Bassaw, T. K., & Eshun, I. (2013). Social Studies tutors’ cognition in formative assessment in colleges of education in Ghana. Development Country Studies, 3(11), 1-11.
  31. Bordoh, A., Eshun, I., Ibrahim, A. W., Bassaw, T. K., Baah, A., & Yeboah, J. (2022). Technological Pedagogical Content Knowledge (TPACK) of Teachers and Their Formative Assessment Practices in Social Studies Lessons. Universal Journal of Social Sciences and Humanities, 2(4), 201–209. DOI: 10.31586/ujssh.2022.459[CrossRef]
  32. Bordoh, A., Brew, E., Otoo, K. A., Owusu-Ansah, P., & Yaw, O, E (2021). Use of Teacher’s Profile Dimensions to Assess Social Studies Student’s Learning Outcomes at The Senior High Schools in Ghana. Innovare Journal of Education, 9(4),14-21.[CrossRef]
  33. Kankam, B., Bordoh, A., Eshun, I., Bassaw, T. K., & Korang. F. Y. (2014). An investigation into authentic assessment practices of Social Studies teachers in the Senior High Schools (SHSs) in Ghana. American Journal of Social Sciences, 2 (6), 166-172.
  34. Lake, C., Harmes, P., & Guill, D. (1998). Defining assessment. Retrieved from: www.essentialschools.org/cs/resource/view/ces_rest/124
  35. Dietel, R. J., Herman, J. L., & Knuth, R. A. (1991). What does research say about assessment? New York: Longman.
  36. Bordoh, A., Bassaw, T. K., & Eshun, I. (2013). Social Studies tutors’ cognition in formative assessment in colleges of education in Ghana. Development Country Studies, 3(11), 1-11.
  37. Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-73.[CrossRef]
  38. Gallagher, J. D. (1998). Classroom assessment for teachers. London: Prentice Hall International.
  39. Harlen, W. (2001). The Assessment of Scientific Literacy: Research in Science Education Past, Present and Future. New York: MacMillan.[CrossRef]
  40. Gronlund, N. E. (1998). Assessment of student achievement. London: Allyn & Brown.
  41. Ellis, P., Jablonski, E., Levy, A., & Mansfield, A. (2008). High school science performance assessments: An examination of instruments for Massachusetts (Electronic Version). Boston: Education Development Center. Retrieved on June 25, 2014, from http://www7.nationalacademies.org/base/tom_shiland_presentation_jun_04.pdf
  42. Tamir, P. (1985). The Israeli “Bagrut” examination in biology revisited. Journal of Research in Science Teaching, 22(1), 31-40.[CrossRef]
  43. Moore, K. O. (1998). Classroom Teaching Skills. Boston: McGraw-Hill Companies, Inc.
  44. International Association for the Evaluation of Educational Achievement [IEA] (1995). Introduction: Performance Assessment. Retrieved on June 20, 2014, from, http://timss.bc.edu/timss1995i/TIMSSPDF/PAintro.pdf
  45. Martin, M. O., Mullis, I. V., Foy, P., & Stanco, G. M. (2012). TIMSS 2011 International results in science. International Association for the Evaluation of Educational Achievement. Herengracht 487, Amsterdam, 1017 BT, The Netherlands.
  46. Atkin, J. M., Black, P., & Coffey, J. (2001). Classroom assessment and the national science education standards. Washington, DC: National Academies Press.
  47. Morrison, J., McDuffie, A., & Akerson, V. (2003). Pre-service teachers' development and implementation of science performance assessment task. (ERIC Document Reproduction Service No. ED478065).
  48. Pfeifer, G. R. (2002). The influence of authentic assessment tasks and authentic illustration on Lutheran Elementary School fifth and sixth grade students’ attitude towards social studies and authentic projects. Unpublished Doctoral Dissertation, University of Minnesota.
  49. Kelly, V. L. (2007). Alternative assessment strategies within a context based science teaching and learning approach in secondary school in Swaziland. Unpublished doctoral thesis. Faculty of Education, University of the Western Cape, Bellville, South Africa.
  50. Ruiz-Primo, M. A. & Shavelson, R. J. (1996). Rhetoric and reality in science performance assessments: An Update. (Electronic Version) Journal of Research in Science Teaching, 33 (10). Retrieved on June 25, 2014, from, http://www.Stanford.edu/dep/SUSE/SEAL/Reports[CrossRef]
  51. Shavelson, R. J., & Baxter, G. P. (1992). What we’ve learned about assessing hands-on science. Educational Leadership,49(8), 20-25.
  52. Kulm, G., & Malcom, S. M. (1991). Science assessment in the service of reform. New York: American Association for the Advancement of Science, Washington, DC.
  53. Radford, D. L., Ramsey, L. L., & Deese, W. C. (1995). Demonstration assessment: Measuring conceptual understanding and critical thinking with rubrics. The Science Teacher, 62(7), 52-55.
  54. Airasian, P. W. (1991). Classroom assessment. New York: McGraw-Hill.
  55. Stiggins, R. J. (1987). The design and development of performance assessments. Educational Measurements: Issues and Practice, 6, 33-42.[CrossRef]
  56. Brualdi, A. (1998). Implementing performance assessment in the classroom. Practical Assessment, Research & Evaluation, 6 (2) 1-3. Retrieved August 7, 2014, from http://PAREonline.net/given.asp?v=6
  57. Cohen, L. & Manion, L. (1991) Research methods in education (4th ed.). London: Routledge.
  58. Fowler, F. J., Jr. (1993). Survey research methods (2nd ed.). Newbury Park, CA: Sage.
  59. Ary, D., Jacobs C. L., & Razavieh A. (2002). Introduction to research in education. USA: Wadsworth Thompson Learning.
  60. Flick, U. (2014). An introduction to qualitative research (5th ed). London: Sage.
  61. Creswell, J. W. (2002). Educational research: Planning, conduction and evaluation of quantitative and qualitative research. Merrill: Prentice Hall.
  62. Cohen, L., Manion, L., & Morrison, K. (2003). Research methods in education, (5th ed.). London: Routledge Falmer.[CrossRef]
  63. Best, J. W., & Kahn J. V. (1995). Research in education. New Delhi: Prentice-hall.
  64. David, M., & Sutton, C. D. (2004). Social research: The basics. London: SAGE Publications Ltd.
  65. Schwart, A., Tiedeman, S. C., & Wallace, D. G. (1962). Evaluating students’ progress in the secondary school. New York: David Mckay Company, Inc.
  66. Awortwi, S. G. (1999). Gender Issues in Science and Technology. Paper presented at the annual meeting of the Ghana Association of Science Teachers (GAST) Conference. Tema.
  67. Mellor D., Beausoleil N. (2015). Extending the ‘five domains’ model for animal welfare assessment to incorporate positive welfare states. Anim. Welf. 24:241–253. doi: 10.7120/09627286.24.3.241.[CrossRef]
  68. Elwood, J. (2006). Formative assessment: Possibilities, boundaries and limitations. Assessment in Education: Principles, Policy & Practice, 13(2), 215-232.[CrossRef]
  69. Spielhofer, T., Benton, T., & Schagen, S. (2004). A study of the effects of school size and single‐sex education in English schools. Research Papers in Education, 19(2), 133-159.[CrossRef]
  70. Streitmatter, J. L. (1999). For girls only: Making a case for single-sex schooling. State University of New York Press.
Cite This Article

APA Style
Anumel, E., Bonney, E. A., & Dzidzinyo, A. F. (2023). Assessing Observing Skills of Biology Students in Selected Senior High Schools. Open Journal of Educational Research, 3(2), 135-152. https://doi.org/10.31586/ojer.2023.720
@Article{ojer720,
AUTHOR = {Anumel, Evelyn and Bonney, Ebenezer Appah and Dzidzinyo, Abigail Fiona},
TITLE = {Assessing Observing Skills of Biology Students in Selected Senior High Schools},
JOURNAL = {Open Journal of Educational Research},
VOLUME = {3},
YEAR = {2023},
NUMBER = {2},
PAGES = {135-152},
URL = {https://www.scipublications.com/journal/index.php/OJER/article/view/720},
ISSN = {2770-5552},
DOI = {10.31586/ojer.2023.720},
ABSTRACT = {The purpose of the study was to design and develop performance-based tasks to assess laboratory observing skills of biology students in senior high schools. The target population was all students in the nine schools within Sekondi-Takoradi Metropolis reading biology as an elective subject. The accessible population was 753 SHS 2 biology students in six schools. 261 students were randomly selected from each of the six schools. These schools were of three different types, single-sex males, single-sex females, and mixed. Mean, standard deviations, frequencies, and percentages were calculated while independent sample t-tests were performed. No significant difference was noticed in levels of proficiency shown for males and females in the various schools surveyed. It is recommended that students from all types of schools and both sexes must be given an opportunity to engage in more activities at the SHS level to sharpen their observing skills.},
}
%0 Journal Article
%A Anumel, Evelyn
%A Bonney, Ebenezer Appah
%A Dzidzinyo, Abigail Fiona
%D 2023
%J Open Journal of Educational Research

%@ 2770-5552
%V 3
%N 2
%P 135-152

%T Assessing Observing Skills of Biology Students in Selected Senior High Schools
%M doi:10.31586/ojer.2023.720
%U https://www.scipublications.com/journal/index.php/OJER/article/view/720
TY  - JOUR
AU  - Anumel, Evelyn
AU  - Bonney, Ebenezer Appah
AU  - Dzidzinyo, Abigail Fiona
TI  - Assessing Observing Skills of Biology Students in Selected Senior High Schools
T2  - Open Journal of Educational Research
PY  - 2023
VL  - 3
IS  - 2
SN  - 2770-5552
SP  - 135
EP  - 152
UR  - https://www.scipublications.com/journal/index.php/OJER/article/view/720
AB  - The purpose of the study was to design and develop performance-based tasks to assess laboratory observing skills of biology students in senior high schools. The target population was all students in the nine schools within Sekondi-Takoradi Metropolis reading biology as an elective subject. The accessible population was 753 SHS 2 biology students in six schools. 261 students were randomly selected from each of the six schools. These schools were of three different types, single-sex males, single-sex females, and mixed. Mean, standard deviations, frequencies, and percentages were calculated while independent sample t-tests were performed. No significant difference was noticed in levels of proficiency shown for males and females in the various schools surveyed. It is recommended that students from all types of schools and both sexes must be given an opportunity to engage in more activities at the SHS level to sharpen their observing skills.
DO  - Assessing Observing Skills of Biology Students in Selected Senior High Schools
TI  - 10.31586/ojer.2023.720
ER  - 
  1. Djangmah, J.S. (1986). Innovative Programmes and Practice in Teacher Education and Training; Report of the National Conference on Teacher Education and its Agencies in Ghana. Cape Coast: Institute of Education.
  2. CRDD. (2010). The teaching syllabus for biology. Accra: Ghana Publishing Corporation.
  3. Huppert, J., Lomask, S. M., & Lazarowitz, R. (2002). Computer simulations in the high school: Students’ cognitive stages, science process skills and academic achievement in microbiology. International Journal of Science Education, 24(8), 803–821.[CrossRef]
  4. Bybee, R. W., Buchwald, C. E., Crissman, S., Heil, D., Kuerbis, P. J., Matsumoto, C., & McInerney, J. D. (1989). Science and technology for the elementary years: Frameworks for curriculum and instruction. Washington, D.C.: National Centre for Improving Science Education.
  5. McMillan, J. H. (2001). Classroom assessment: Principles and practice for effective instruction. Boston: Allyn and Bacon.
  6. Osman, S., Bordoh, A., & Eshun, I. (2021). Basic school teachers’ conceptions of assessment in the Sissala East Municipality. International Journal of Research and Innovation in Social Science, 5(3), 311-324.[CrossRef]
  7. Eshun, I., Bordoh, A., Bassaw, T. K., & Mensah, M. F. (2014). Evaluation of social studies students’ learning using formative assessment in selected Colleges of Education in Ghana. British Journal of Education, 2(1), 39-48.
  8. Cusick, M. E., Klitgord, N., Vidal, M., & Hill, D. E. (2005). Interactome: Gateway into systems biology. Human Molecular Genetics, 14(suppl_2), R171-R181.[CrossRef] [PubMed]
  9. Bordoh, A., Eshun, I., Quarshie, A. M., Bassaw, T. K., & Kwarteng, P. (2015). Social Studies Teachers’ Knowledge Base in Authentic Assessment in Selected Senior High Schools in the Central Region of Ghana. Journal of Social Sciences and Humanities, 1(3), 249-257.
  10. Nitko, A. J. (2004). Educational assessment of students (4th ed.). Columbus: Merrill.
  11. Bekoe, S. O., Eshun, I., & Bordoh, A. (2013). Formative assessment techniques tutors use to assess teacher-trainees’ learning in Social Studies in Colleges of Education in Ghana. Research on Humanities and Social Sciences, 3(4), 20-30.
  12. Kankam, B., Bordoh, A., Eshun, I., Bassaw, T. K., & Korang, F. Y. (2014). Teachers’ perception of authentic assessment techniques practice in Social Studies lessons in Senior High Schools in Ghana. International Journal of Educational Research and Information Science, 1(4), 62-68.
  13. Anthony-Krueger, C. (2001). Assessing some process skills of SSS students in elective biology. Unpublished Master’s Thesis, University of Cape Coast. Cape Coast.
  14. Tachie Y. T. (2001). Assessing observational skills of JSS students in the Ashiedu Keteke sub-metropolis. Unpublished Master’s Thesis, University of Cape Coast. Cape Coast.
  15. Dadzie, A. (2011). Assessing laboratory skills of selected JHS students in selected schools of Cape Coast Metropolis. Unpublished Master’s Thesis, University of Cape Coast. Cape Coast.
  16. Afuwape, M. O., & Oludipe, D. (2008). Gender differences in integrated science achievement among pre-service teachers in Nigeria. Educational Research and Review, 3(7), 242-245.
  17. Babajide, V. F. T. (2010). Generative and Predict-Observe-Explain Instructional Strategies as Determinants of Senior Secondary School Students’ Achievement and Practical Skills in Physics. Unpublished doctoral Thesis, University of Ibadan, Nigeria.
  18. Agyei, C. A. (2012). Assessing laboratory skills of biology students in selected senior high schools. Unpublished Master’s Thesis, University of Cape Coast. Cape Coast, Ghana.
  19. Yoloye, T. W. (2004). Increasing female participation in science: Mimeograph, University of Ibadan: Nigeria.
  20. Omoniyi, O. A. (2006). The effect of the constructivist-based teaching strategy on gender-related differences on students’ misconceptions in chemistry. Akure, Nigeria: Ministry of Education.
  21. Weaver-Hightower, M. (2003). The “boy turn” in research on gender and education. Review of Educational Research, 73(4), 471-498.[CrossRef]
  22. Njoku, Z. C. (2002). Enhancing girls’ acquisition of science process skills in co-educational schools: An experience with sex grouping for practical chemistry. Journal of the Science Teachers Association of Nigeria, 37(1 & 2).
  23. WAEC (2012). Chief examiners' reports on senior secondary school certificate examinations. Accra: Wisdom Press.
  24. WAEC (2018). Chief examiners' reports on senior secondary school certificate examinations. Accra: Wisdom Press.
  25. Cortes, K. L., Reid, J. W., Fallin, R., Hao, J., Shah, L., Ray, H. E., & Rushton, G. T. (2022). A longitudinal study identifying the characteristics and content knowledge of those seeking certification to teach secondary biology in the United States. CBE—Life Sciences Education, 21(4), ar63.[CrossRef] [PubMed]
  26. WAEC (2020). Chief examiners' reports on senior secondary school certificate examinations. Accra: Wisdom Press.
  27. Bordoh, A., Nyantakyi, F., Otoo, K. A., Boakyewaa, A., Owusu-Ansah, P., & Eshun, I. (2021). Effective teaching of Social Studies concepts in Basic Schools Ghana. Universal Journal of Social Sciences and Humanities, 1(1), 46-53.[CrossRef]
  28. Bordoh, A. (2022). Teacher Trainees Use of Feedback in Assessing Student Learning in Social Studies Lessons in Basic Schools in Ghana: A Study of Selected Colleges of Education in Western and Northern Regions. Social Educator, 7(1), 27-43.
  29. Shepard, L. A. (2000). The Role of assessment in a learning culture. Education Researcher, 29, 23-28.[CrossRef]
  30. Bordoh, A., Bassaw, T. K., & Eshun, I. (2013). Social Studies tutors’ cognition in formative assessment in colleges of education in Ghana. Development Country Studies, 3(11), 1-11.
  31. Bordoh, A., Eshun, I., Ibrahim, A. W., Bassaw, T. K., Baah, A., & Yeboah, J. (2022). Technological Pedagogical Content Knowledge (TPACK) of Teachers and Their Formative Assessment Practices in Social Studies Lessons. Universal Journal of Social Sciences and Humanities, 2(4), 201–209. DOI: 10.31586/ujssh.2022.459[CrossRef]
  32. Bordoh, A., Brew, E., Otoo, K. A., Owusu-Ansah, P., & Yaw, O. E. (2021). Use of teacher’s profile dimensions to assess Social Studies students’ learning outcomes at the Senior High Schools in Ghana. Innovare Journal of Education, 9(4), 14-21.[CrossRef]
  33. Kankam, B., Bordoh, A., Eshun, I., Bassaw, T. K., & Korang, F. Y. (2014). An investigation into authentic assessment practices of Social Studies teachers in the Senior High Schools (SHSs) in Ghana. American Journal of Social Sciences, 2(6), 166-172.
  34. Lake, C., Harmes, P., & Guill, D. (1998). Defining assessment. Retrieved from: www.essentialschools.org/cs/resource/view/ces_rest/124
  35. Dietel, R. J., Herman, J. L., & Knuth, R. A. (1991). What does research say about assessment? New York: Longman.
  36. Bordoh, A., Bassaw, T. K., & Eshun, I. (2013). Social Studies tutors’ cognition in formative assessment in colleges of education in Ghana. Development Country Studies, 3(11), 1-11.
  37. Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-73.[CrossRef]
  38. Gallagher, J. D. (1998). Classroom assessment for teachers. London: Prentice Hall International.
  39. Harlen, W. (2001). The Assessment of Scientific Literacy: Research in Science Education Past, Present and Future. New York: MacMillan.[CrossRef]
  40. Gronlund, N. E. (1998). Assessment of student achievement. London: Allyn & Brown.
  41. Ellis, P., Jablonski, E., Levy, A., & Mansfield, A. (2008). High school science performance assessments: An examination of instruments for Massachusetts (Electronic Version). Boston: Education Development Center. Retrieved on June 25, 2014, from http://www7.nationalacademies.org/base/tom_shiland_presentation_jun_04.pdf
  42. Tamir, P. (1985). The Israeli “Bagrut” examination in biology revisited. Journal of Research in Science Teaching, 22(1), 31-40.[CrossRef]
  43. Moore, K. O. (1998). Classroom Teaching Skills. Boston: McGraw-Hill Companies, Inc.
  44. International Association for the Evaluation of Educational Achievement [IEA] (1995). Introduction: Performance Assessment. Retrieved on June 20, 2014, from, http://timss.bc.edu/timss1995i/TIMSSPDF/PAintro.pdf
  45. Martin, M. O., Mullis, I. V., Foy, P., & Stanco, G. M. (2012). TIMSS 2011 International results in science. International Association for the Evaluation of Educational Achievement. Herengracht 487, Amsterdam, 1017 BT, The Netherlands.
  46. Atkin, J. M., Black, P., & Coffey, J. (2001). Classroom assessment and the national science education standards. Washington, DC: National Academies Press.
  47. Morrison, J., McDuffie, A., & Akerson, V. (2003). Pre-service teachers' development and implementation of science performance assessment task. (ERIC Document Reproduction Service No. ED478065).
  48. Pfeifer, G. R. (2002). The influence of authentic assessment tasks and authentic illustration on Lutheran Elementary School fifth and sixth grade students’ attitude towards social studies and authentic projects. Unpublished Doctoral Dissertation, University of Minnesota.
  49. Kelly, V. L. (2007). Alternative assessment strategies within a context based science teaching and learning approach in secondary school in Swaziland. Unpublished doctoral thesis. Faculty of Education, University of the Western Cape, Bellville, South Africa.
  50. Ruiz-Primo, M. A., & Shavelson, R. J. (1996). Rhetoric and reality in science performance assessments: An update. (Electronic Version) Journal of Research in Science Teaching, 33(10). Retrieved on June 25, 2014, from http://www.Stanford.edu/dep/SUSE/SEAL/Reports[CrossRef]
  51. Shavelson, R. J., & Baxter, G. P. (1992). What we’ve learned about assessing hands-on science. Educational Leadership, 49(8), 20-25.
  52. Kulm, G., & Malcom, S. M. (1991). Science assessment in the service of reform. New York: American Association for the Advancement of Science, Washington, DC.
  53. Radford, D. L., Ramsey, L. L., & Deese, W. C. (1995). Demonstration assessment: Measuring conceptual understanding and critical thinking with rubrics. The Science Teacher, 62(7), 52-55.
  54. Airasian, P. W. (1991). Classroom assessment. New York: McGraw-Hill.
  55. Stiggins, R. J. (1987). The design and development of performance assessments. Educational Measurements: Issues and Practice, 6, 33-42.[CrossRef]
  56. Brualdi, A. (1998). Implementing performance assessment in the classroom. Practical Assessment, Research & Evaluation, 6(2), 1-3. Retrieved August 7, 2014, from http://PAREonline.net/given.asp?v=6
  57. Cohen, L. & Manion, L. (1991) Research methods in education (4th ed.). London: Routledge.
  58. Fowler, F. J., Jr. (1993). Survey research methods (2nd ed.). Newbury Park, CA: Sage.
  59. Ary, D., Jacobs, C. L., & Razavieh, A. (2002). Introduction to research in education. USA: Wadsworth Thomson Learning.
  60. Flick, U. (2014). An introduction to qualitative research (5th ed). London: Sage.
  61. Creswell, J. W. (2002). Educational research: Planning, conduction and evaluation of quantitative and qualitative research. Merrill: Prentice Hall.
  62. Cohen, L., Manion, L., & Morrison, K. (2003). Research methods in education, (5th ed.). London: Routledge Falmer.[CrossRef]
  63. Best, J. W., & Kahn, J. V. (1995). Research in education. New Delhi: Prentice-Hall.
  64. David, M., & Sutton, C. D. (2004). Social research: The basics. London: SAGE Publications Ltd.
  65. Schwartz, A., Tiedeman, S. C., & Wallace, D. G. (1962). Evaluating students’ progress in the secondary school. New York: David McKay Company, Inc.
  66. Awortwi, S. G. (1999). Gender Issues in Science and Technology. Paper presented at the annual meeting of the Ghana Association of Science Teachers (GAST) Conference. Tema.
  67. Mellor, D., & Beausoleil, N. (2015). Extending the ‘five domains’ model for animal welfare assessment to incorporate positive welfare states. Animal Welfare, 24, 241-253. doi: 10.7120/09627286.24.3.241.[CrossRef]
  68. Elwood, J. (2006). Formative assessment: Possibilities, boundaries and limitations. Assessment in Education: Principles, Policy & Practice, 13(2), 215-232.[CrossRef]
  69. Spielhofer, T., Benton, T., & Schagen, S. (2004). A study of the effects of school size and single-sex education in English schools. Research Papers in Education, 19(2), 133-159.[CrossRef]
  70. Streitmatter, J. L. (1999). For girls only: Making a case for single-sex schooling. State University of New York Press.