A&HHE Special Issue August 2016

Rich feedback and assessment environment in a horn studio: practising scales

Julius Pranevicius

Norwegian Academy of Music


It is clear that the quality of students’ practice depends both on the time spent on the task and on the strategies used in the process. At the same time, assessing these aspects in individual lessons is not always easy. This paper outlines an approach I devised in which some students submitted recordings of their scales practice by email between lessons. Concise feedback messages were emailed back to the students shortly after each submission. This approach seems to have been fruitful. The framework of seven good-feedback principles (Nicol and Macfarlane‐Dick, 2006) is used to reflect upon the feedback. The development of a peer review-based feedback platform is proposed as a next step in the project. Such a platform would add an aspect of peer learning, encouraging constructive and efficient feedback processes among peers as well as between student and teacher.


Keywords: Horn, scales, music, practice, assessment, feedback, self-regulation.


As a horn teacher, I used to devote part of each individual lesson to playing scales. I discovered, however, that some students kept coming underprepared, and precious minutes were wasted simply doing the practising that the students should have been doing independently. To improve our efficiency I therefore organised a weekly scale class with all my students instead. Having to play in front of peers, I thought, would increase motivation to prepare well. But this is not what happened – some students were still underprepared! Occasional student absence would also result in a missed opportunity to play that week’s assignment and get feedback on it. Furthermore, as no records were being kept, no one knew how far the students had progressed. The opportunity to participate in a development project within CEMPE (Centre of Excellence in Music Performance Education) at the Norwegian Academy of Music in Oslo provided some resources to explore possibilities.

Constructive alignment and the principles of good feedback

Biggs et al.’s (2011) framework of “constructive alignment” provided directions for the design of the new learning task. The framework advocates aligning learning activities and assessment tasks with the intended learning outcomes (ILOs) to encourage deep learning approaches. Determining ILOs was key to the initial phase of the project, and proved rather difficult. A written ILO should both clarify what a student will need to have achieved and make evident what has to be done in order to attain this (Biggs et al., 2011: 119). One such ILO for the project that underpins the basis of this paper is for the students to “Record and evaluate their own playing”. This entailed a student recording a short piece of musical material and sending it to me by email together with a self-assessment message, to which I responded with my feedback. I formatively assessed the quality of the recording and the level of reflection demonstrated in the self-assessment.

In addition to “constructive alignment”, in this article I use the framework of seven principles of good feedback practice encouraging self-regulation proposed by Nicol and Macfarlane‐Dick (2006). According to these authors, good quality feedback “helps students take action to reduce discrepancy between their intentions and the resulting effects”. Furthermore, in relation to intentions, or learning goals, they suggest that feedback should help clarify these over time, in addition to providing guidance on how they may be attained. On the other hand, research also suggests that feedback messages can in fact be difficult for students to decipher, and that in order to act on feedback, students need to shape their understanding of what is expected of them through discussion and negotiation with their teachers (Higgins et al., 2001). Seeing students as active feedback consumers who occupy “a central […] role in all feedback processes” is key to Nicol and Macfarlane‐Dick’s (2006) model, and they suggest dialogue (as opposed to unidirectional information transfer) as a more constructive metaphor for feedback. Students should be allowed to respond to feedback to complete the feedback loop (Sadler, 1989), as “unless students are able to use the feedback to produce improved work, through for example, re-doing the same assignment, neither they nor those giving the feedback will know that it has been effective” (Boud, 2000: 158).

The frequency and regularity of feedback enable students to monitor and self-regulate their progress better (Gibbs and Simpson, 2004). In my experience, regular feedback also contributes to fostering the students’ incremental view of their ability. Positive motivational beliefs and self-esteem also play an important role in students’ learning (Dweck, 1999).

Encouraging self-assessment skills seems to be in line with good feedback practices and is an important tool for self-regulation (Nicol and Macfarlane‐Dick, 2006). Self-assessment and the students’ reactions to feedback give teachers insight into the way the students think, which in turn can be used to shape and improve teaching (Nicol and Macfarlane‐Dick, 2006).

In addition, Nicol and Macfarlane‐Dick (2006) also mention the benefits of peer learning:

* peers can often better explain things they have just learned (Boyle and Nicol, 2003)

* exposure to a variety of alternative perspectives on problems

* peer-assessment skills can be transferred to self-assessment

* increased motivation (Boyle and Nicol, 2003)

* students might be more receptive to critiques from peers

In summary, Nicol and Macfarlane-Dick (2006) propose that effective feedback:

1. helps clarify what good performance is;

2. facilitates the development of self-assessment (reflection) in learning;

3. delivers high quality information to students about their learning;

4. encourages teacher and peer dialogue around learning;

5. encourages positive motivational beliefs and self-esteem;

6. provides opportunities to close the gap between current and desired performance;

7. provides information to teachers that can be used to help shape teaching.


The project took place as a sub-project of Teaching of Practicing within CEMPE at the Norwegian Academy of Music. One student was assigned the recording tasks for the full duration of the project (4 months). Several other students were involved for shorter periods.

The learning task consisted of recording a piece of musical material and emailing it to me as the teacher together with a self-assessment message. I then assessed it, giving feedback on how well the student had performed and how improvement might be made. In cases where the recording lacked expected quality, I might also ask for the task to be repeated.

Since the start of the project, the task and the feedback process have been adapted in several ways:

* Content: etudes at first, then scales, with some orchestral excerpts along the way;

* Frequency of submission;

* Length of the submission;

* Inclusion of self-assessment messages from the students;

* The choice of words used for feedback and how these were negotiated collaboratively with the students.

The version used most frequently was that of daily submitted scales with self-assessment messages.

The project goals were to encourage the students to take more responsibility for, and ownership of, their learning by finding/developing easy ways for them to monitor, assess and improve their own performance.


Accomplishing the recording task requires not only good instrumental skills, but also efficient strategies for recording the material (how many takes, how much to practise in advance, when to record) and technical skill in using the recording equipment and managing task submission, all of which make the overall assignment rather complex. The students’ working habits and the way they dealt with the task’s challenges were illuminated by the way they submitted their recordings. For example, a lack of planning was apparent with some students, and weekly submissions were replaced by daily ones to support the planning process. Other students referred to challenges in tackling the task’s complexity (after failing to submit their recordings regularly). These issues (which, interestingly, I rarely discussed with my students prior to the project) were addressed during the individual lessons. This new kind of information from student to teacher (recorded material and self-assessment messages) enriched my appreciation of the students’ thinking and helped shape my teaching.

Through engagement with the task, the students produced learning artefacts in the form of recordings that could be studied and analysed, promoting listening and reflection skills. I found this particularly helpful, as live performances tend to “evaporate”, making it difficult to refer to a particular spot in the performance during discussion.

A practical benefit of the recording task is that it ensures that a student uses some sort of recording equipment. Most students have several recording devices at their disposal in the form of smartphones and laptops. However, these are seldom used, and rarely in a systematic way.

One aspect in which the previous approach of weekly scale sessions had failed was in providing opportunities for closing the gap between current and desired performance. The advantage of daily submission by email was that students could re-submit assignments easily, due to email’s informal and asynchronous nature (with no need for the teacher to be present). The feedback loop was thus closed, and learning resulting from specific feedback could be assessed.

Due to the size (length) of assignments and the chosen format, submission frequency and regularity became inherent properties of the process. It could be said that it was not only how well the students had mastered the scales that was assessed, but also the process of daily practice. As the frequency of feedback was freed from the limitations of weekly teacher-student interactions, teaching and learning became richer, more collaborative and intertwined. Learning was emphasised as a continuous process rather than a weekly occurrence. Given the asynchronous nature of the setup used, face-to-face interaction with all its benefits ceased to be the exclusive mode of communication. Feedback became poly-synchronous and poly-modal.

From the beginning of the project it became apparent that not all feedback messages worked equally well, and that some were difficult for the students to decipher. On one occasion one of the students remarked that some feedback inhibited good performance rather than providing guidance for improvement. Words like “flawless” and “without mistakes” caused tension and anxiety. These issues were addressed through mutual negotiation during individual lessons, where it was easier to elaborate and to clarify misconceptions. This led to an agreement to have one day a week without having to submit any recordings, and to a more careful choice of words on my part in giving feedback, to avoid unnecessary tension. The student, for her part, acknowledged the importance of establishing regular daily practice routines. Deliberately short feedback messages seemed to prompt students to engage in discussions on the meaning of those messages. Having the messages in written form also gave a point of reference for the discussions.

The limitations of the technological setup I had in these stages of the project did not enable peers to be involved in the feedback process. A possible solution to this issue, however, is discussed in the last section of this paper.

A vision for a technological platform

A review of existing technological solutions was not fruitful, as many products are neither particularly user-friendly nor primarily designed for this kind of task. User-friendliness and ease of use, however, were clearly going to be crucial to embedding the project on a larger scale and to evolving it to include peer feedback.

A visual representation of a platform that could be developed to facilitate and manage rich feedback and assessment environments in music performance studies is presented in Figure 1. There are three key technical elements in this platform: the submission mechanism (“record”), collection and organisation mechanisms for the submitted recordings (“collect”), and feedback and assessment mechanisms that set the stage for interaction between the student, teacher(s) and peers and subsequent reflection (“reflect”).

The wish to develop a specifically designed system stems from a common criticism of traditional virtual learning environments (VLEs) as monolithic and inflexible (Wells et al., 2013). The key aspect of the proposed system would be its modularity and the loose coupling of its components. As students already use many services that provide functionality similar to that needed by the subparts of my proposed system, the goal would be to bring those services into one system that provides an effortless and user-friendly experience.


Figure 1. Rich feedback and assessment platform
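The three elements described above can be illustrated with a minimal data-model sketch. Everything here – the class names, fields and example data – is hypothetical and does not describe any existing product; the sketch only shows how loosely coupled “record”, “collect” and “reflect” components might fit together.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class Submission:
    """'Record': one recording plus the student's self-assessment message."""
    student: str
    material: str            # e.g. "F major scale, two octaves"
    audio_file: str          # path or URL to the recording
    self_assessment: str
    submitted_on: date


@dataclass
class Feedback:
    """A short feedback message attached to a submission."""
    author: str              # the teacher or, in a later stage, a peer
    message: str


@dataclass
class FeedbackThread:
    """'Reflect': dialogue around one submission, closing the feedback loop."""
    submission: Submission
    responses: List[Feedback] = field(default_factory=list)

    def respond(self, author: str, message: str) -> None:
        self.responses.append(Feedback(author, message))


class Archive:
    """'Collect': keeps submissions organised per student, so that progress
    can be reviewed over time (something the earlier setup could not do)."""

    def __init__(self) -> None:
        self._threads: List[FeedbackThread] = []

    def submit(self, submission: Submission) -> FeedbackThread:
        thread = FeedbackThread(submission)
        self._threads.append(thread)
        return thread

    def history(self, student: str) -> List[FeedbackThread]:
        return [t for t in self._threads if t.submission.student == student]


# Usage: a daily scale submission with teacher feedback and a student reply.
archive = Archive()
thread = archive.submit(Submission(
    student="Anna",
    material="F major scale",
    audio_file="anna_f_major_2016-03-01.mp3",
    self_assessment="Tempo steady, but the top notes felt tight.",
    submitted_on=date(2016, 3, 1),
))
thread.respond("teacher", "Good evenness. Try a softer attack on the top notes.")
thread.respond("Anna", "Will re-record tomorrow with a softer attack.")
print(len(archive.history("Anna")))  # prints 1
```

The loose coupling is visible in the fact that each component could be replaced independently – the recording could come from any device or service, the archive could sit on any storage backend, and the thread mechanism could later admit peers as additional feedback authors without changing the other parts.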


Constant feedback on etudes and scales, together with guidance on how improvement might be achieved, completed a feedback loop in this project, resulting in deeper understanding and better learning. I believe that the method is transferable to musical material other than scales, as well as to other instruments. The implementation of the recording task seems to fit neatly into the framework of good feedback practices encouraging students’ self-regulation. Implementing an integrated platform would allow for a larger number of participants to be involved, and for increased feedback and the development of understanding through peer discussion.


I would like to thank CEMPE for the opportunity to participate in the “Teaching of Practice” project and Professor Harald Jørgensen for giving his valuable feedback and support.


Biggs JB, Tang CS and Society for Research into Higher Education (2011) Teaching for quality learning at university: what the student does. Maidenhead: McGraw-Hill/Society for Research into Higher Education/Open University Press.

Boud D (2000) Sustainable Assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.

Boyle JT and Nicol DJ (2003) Using classroom communication systems to support interaction and discussion in large class settings. Research in Learning Technology, 11(3).

Dweck CS (1999) Self-theories: their role in motivation, personality, and development. Philadelphia, PA: Psychology Press.

Gibbs G and Simpson C (2004) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1(1), 3–31.

Higgins R, Hartley P and Skelton A (2001) Getting the Message Across: The problem of communicating assessment feedback. Teaching in Higher Education, 6(2), 269–274.

Nicol DJ and Macfarlane‐Dick D (2006) Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

Sadler DR (1989) Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.

Wells M, Lefevre D and Begklis F (2013) Innovation via a Thin LMS: A middleware alternative to the traditional learning management system. In: 30th Ascilite Conference. Macquarie University, Australia.