Accreditation in distance learning

Pedagogical criteria
WI 3–7 – v3.1 (Poli)

13th October 2004

Master level (MBA and master of science)

Vocational and initial training

version / work item / responsible / date / changes made
v1.0 / WI 3,4,5,6,7 / Poli / 11th June
v2.0 / WI 3,4,5,6,7 / Poli / 16th Sept / Reworking of paragraphs, contents and relevant Work Items; introduction of the shared format by Altran

NOTES:

a)  Our accrediting approach takes Pedagogical Objectives as the key factor for evaluating the whole course programme. Every other element (activities, learning resources, technical tools, etc.) is evaluated for its coherence and effectiveness with respect to the stated didactical objectives. This choice is born of the consideration that “no specific didactic approach can be considered as an absolute reference model”: any evaluation of the didactical approach employed in a course programme is, in itself, arbitrary. Neither the scientific literature nor common practice identifies the “best model” of distance learning; at present it is only possible to find scientific references on how to design a good course step by step. It is, however, a shared assumption that the validity and effectiveness of a pedagogical approach must be evaluated with regard to the specific didactical context and objectives.

b)  The work is complex. We have needed to discuss at length, and will need to work together much longer, on:

i.  which criteria are relevant for us

ii.  at what level of specification

iii.  which relevant pieces of information can easily be measured, monitored, detected and evaluated in our accrediting process

The complexity of the discussion therefore stems largely from the difficulty of making explicit which criteria are actually linked to an evaluation based on the principles of coherence and effectiveness, rather than on debatable didactical reference models.

c)  As a first result of our discussion, Work Items 3 to 7 have been re-organised and articulated differently, in accordance with the assumption explained above.


INDEX OF SECTION 2 – PEDAGOGICAL CRITERIA

2.1 Pedagogical Design

2.1.1 Introduction

2.1.2 Requirements

2.1.3 Didactical Objectives

2.1.4 Didactical Environment

2.1.5 Communication issues

2.1.6 Documentation to be delivered

2.1.7 Process Summary

2.2 Technological facilities

2.2.1 Introduction

2.2.2 Technological Features Map

2.2.3 Technical Analysis of Distance Learning Management System

2.2.4 Documentation to be delivered

2.2.5 Process Summary

2.3 Monitoring and Evaluation

2.3.1 Introduction

2.3.2 Monitoring and evaluation of learning

2.3.3 Monitoring and evaluation of student satisfaction

2.3.4 Monitoring and evaluation of didactical usefulness of learning tools employed

2.3.5 Documentation to be delivered

2.3.6 Process Summary

Enclosure 1 – Comments on paragraphs 2.2 and 2.2.2

The Technological Features Map

Introduction

Explanation of the grid's items

Features

Characteristics

Hi / Med / Low

Notes

Note about user tests

Enclosure 1 – Comments on paragraphs 2.2 and 2.2.2

The Technological Features Map

Introduction

In this section of the document you will find a table designed as a practical tool for the assessor in charge of evaluating the so-called LCMS / e-learning platform.[1]

The aim of this short introduction is to explain and justify the ideas of the DLAE staff on how to approach this part of the evaluation process.

As everyone knows, at the time of writing there is a wealth of different products with similar functions and features, developed by small and large companies, by institutions and even by individual e-learning experts.

In recent years, a great deal of effort has gone into comparing, evaluating and judging all these different LCMSs.

In our process we did not catalogue all the existing platforms, nor did we take ‘the most common ones’ and compare them point by point. We already do this in our day-to-day work, every time we choose or develop a product for an e-learning course.[2]

We believe that a document like this is not the right tool for a traditional comparison of the kind: Platform A offers forum, chat and file sharing, while Platform B offers video, chat, and so on.

A multimedia, interactive website, updatable every week, may serve better (see for instance the very interesting EduTools website at http://www.edutools.info/course/compare/index.jsp).

We therefore wanted to put ourselves in the assessor’s shoes, and we do not imagine him as an expert in the whole e-learning process; he cannot be. Obviously, we need a staff of evaluators (see part XX), but we imagine just one (or at most two) of them travelling to the evaluated company/institution to conduct the interviews and see what happens face to face.

In a case like this, the assessor needs an easy-to-use tool that is at the same time as exhaustive as possible[3] and genuinely helpful.

Explanation of the grid's items

Features / Characteristic / Hi / Med / Low / Note

Features

The table was designed by a group of very different people: we involved engineers, system administrators, didactical experts, teachers, content managers and graphic designers. It was really interesting to see all the different approaches and points of view clash and then find common paths and common ground, mainly in defining what we were talking about.

The most critical element of the work is, in our view, the object itself: the object and its name, which, in the table, sits in the first column, in the brown rows.

That is why we could only settle on the term ‘features’, which is the most generic and which, in some languages such as Italian, has a very wide range of translations.

In our scheme, a feature means “something that is present or not” in a platform.[4] So, in the same column you will find very different things, such as forum and audio, where forum is a service and audio is a medium.

We are aware of the criticism that a classification like this may attract, for many reasons, from the aforementioned lack of homogeneity to the fact that items such as live sessions include other elements that appear in the same column. But, for practical purposes, we deliberately decided to separate the whole from its parts and to consider the sub-characteristics of each one separately.

Which brings us to the second column.

Characteristics

Here too, homogeneity gives way to the attempt to consider as many parameters as possible of the feature in question.

When we are talking about, say, video, the answers we need are the replies to the following questions:

- when can a video be considered good?

- how can I measure this goodness?

- and good for what use?

Obviously, every different kind of object needs different parameters of comparison: you cannot compare three chats in terms of kilohertz. It is also useless to list every possible value of each parameter (24 frames per second, 23, 22, and so on).

That is why we decided to keep only three ‘degrees of goodness’.

Hi / Med / Low

At the beginning, the first step was simply to state, for every characteristic, whether it is present or not. In this way, the table could consist of nothing but checkboxes meaning YES or NO, as in a binary system.

On the other hand, as mentioned above, some features have characteristics with an almost infinite range of possible values. Think of moderation in a forum, where many different kinds of filters and controls exist (e.g. the moderator reads every message before sending it to the forum, filters only a few users, just sends messages to ‘calm’ rude users, or is merely present or not…).

The solution we adopted is to keep only three degrees: high, medium and low, once again using terms with a very wide meaning and usage, so that they work as well for frames per second as for the number of colours.

The last of the three questions above, “and good for what use?”, finds its answer in the Notes paragraph.
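To make the structure of the grid concrete, here is a minimal sketch of how its entries could be represented in software. It is only an illustration under our own assumptions: the names (Level, Characteristic, Feature) are hypothetical and are not part of the accreditation scheme itself.

    from dataclasses import dataclass, field
    from enum import Enum

    class Level(Enum):
        # The three 'degrees of goodness' used throughout the map.
        HI = "Hi"
        MED = "Med"
        LOW = "Low"

    @dataclass
    class Characteristic:
        name: str          # e.g. "moderation"
        level: Level       # the assessor's judgement: Hi / Med / Low
        note: str = ""     # the link to a real didactical use

    @dataclass
    class Feature:
        # A feature is "something that is present or not" in a platform.
        name: str
        present: bool = True
        characteristics: list = field(default_factory=list)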

Notes

These are not just additional explanations of the single characteristics mentioned.

As anyone can imagine, evaluating how ‘good’ a chat, a video or a whiteboard is means nothing if you attempt it in absolute, ‘superlative’ terms. The evaluation must be relative; but relative to what? To the use you make of the feature? To the kind of course in question?

Our choice is to design evaluation tools that relate the features and their characteristics to a didactical environment.[5]

For instance, video quality is measured in frame rate (usually from 24 fps down to 4): but is a perfect TV-quality video transmission what I need when I am simply giving an online mathematics lecture? In that case, video matters only for seeing the teacher’s face and getting an idea of him, of his expressions and gestures.

So, in which cases do I need very good quality? Perhaps in subjects where the practical content of the teaching has to be shown and seen, e.g. in a course for surgeons...

The notes therefore explain these connections, with examples of real use of the features.
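As a toy illustration of this relative evaluation, the sketch below rates the same measured frame rate differently depending on the didactical use. The thresholds are entirely invented for the example, not prescribed values:

    # Hypothetical thresholds: the required frame rate depends on the
    # didactical use, not on an absolute notion of "TV quality".
    REQUIRED_FPS = {
        "online lecture":  8,   # enough to see the teacher's face and gestures
        "surgery course": 24,   # practical gestures must be shown in detail
    }

    def rate_video(measured_fps: float, use: str) -> str:
        """Return Hi / Med / Low relative to the didactical use."""
        required = REQUIRED_FPS[use]
        if measured_fps >= required:
            return "Hi"
        if measured_fps >= required / 2:
            return "Med"
        return "Low"

    print(rate_video(12, "online lecture"))  # Hi: 12 fps suffices for a lecture
    print(rate_video(12, "surgery course"))  # Med: the same video is not enough here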

Note about user tests

This is just a small first step towards a wider consideration that applies to the evaluation of courses of every kind:

E-learning develops in the innovative field of digital communication. Evaluations must therefore be flexible enough to favour innovation rather than block it, in both directions: technical and methodological.

It is very difficult, and perhaps useless, to define once and for all whether a feature (or a way of implementing it) is good or not: users, conditions of use, technologies and methodologies yet to come may be so varied that any statement based on a formal metric system may be true in one specific e-learning context and wrong in another. We therefore think that a reliable evaluation of an e-learning course has to be developed in two steps:

·  a preliminary analytical enquiry that allows the evaluator to understand in depth how the course is organised and to note its critical characteristics;

·  a system of tests involving all the main kinds of potential users (teachers, students, tutors and so on), planned on the basis of the “critical points” raised by the previous analysis.

Final users have to test the course and, under the supervision of a staff of assessors and evaluators, you can get back a true picture of what is working and what has to be changed.

In our DLAE case, the task is therefore:

-  to check that the designers ran user tests before the course was delivered, i.e. to check the relevant documentation and results;

-  if those tests were not carried out beforehand, to interview the staff involved (teachers, students…) and collect their opinions and their own evaluations.


Figure 1 - The Technological Features Map

Features / Characteristic / Hi / Med / Low / Note / Certificate

FORUM
- message clustering / Hi: easy movement through the threads, subdivision of the arguments on several levels, immediate visualisation of the argument tree (number of arguments and answers) / Med: perception of “where you are” / Low: plain list of messages / Note: threaded discussion forums can be organised into categories, so that exchanges of messages and responses are grouped together and easy to find
- file attachments / Hi: many attachments per message / Med: one attachment per message / Low: no attachments
- via HTTP/NNTP / Hi: both / Med-Low: only HTTP or NNTP / Note: forum messages can be read in the user’s client via the NNTP protocol, or in a browser through specific tools
- moderation / Hi: every message (as a filter) / Med: applied to a list of users / Low: no moderation / Note: the forum can be managed by moderators
- export / saving capabilities / Hi: yes / Med: save only / Low: no / Note: possibility to recover the history of the forum
- possibility to modify submitted messages / Hi: re-editing and deleting / Med: deleting only / Low: no

EMAIL
- reachability of tutors and teachers / Hi: day by day / Med: within one week / Low: random / Note: measured as the reaction time of teachers and tutors; recorded for eventual documentation (quality processes)
- email for every student / Hi: yes / Low: no / Note: every user has an assigned personal mail account
- email for every teacher / Hi: yes / Low: no / Note: every teacher or tutor has an assigned personal mail account
- via HTTP, SMTP, IMAP / Hi: all protocols / Med: SMTP and HTTP / Low: SMTP only / Note: the SMTP and IMAP protocols allow the mail to be managed directly in the client

MAILING LIST (see also “EMAIL”)
- archive / Hi: yes / Low: no
- moderation / Hi: every message (as a filter) / Med: applied to a list of users / Low: no moderation / Note: moderation and filtering of the mail

CHAT
- selective chat / Hi: public and private chat / Med: public, private only with the moderator / Low: public only / Note: some chats allow messages to be sent to single users and offer instant-messaging functionality (ICQ)
- export / saving capabilities / Hi: yes / Med: save only / Low: no / Note: possibility to recover the history of the chat
- history recovery during the live session / Hi: yes / Low: no / Note: possibility to recover the history only during the live session (for example ICQ)
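As a usage example of the illustrative schema sketched earlier, a few FORUM rows of the map could be recorded and summarised as follows. The ratings shown are invented, purely to demonstrate the format:

    # Build on the hypothetical Feature / Characteristic / Level classes above.
    forum = Feature("FORUM", characteristics=[
        Characteristic("message clustering", Level.MED,
                       "perception of 'where you are' in the thread tree"),
        Characteristic("file attachments", Level.HI,
                       "many attachments per message"),
        Characteristic("moderation", Level.LOW, "no moderation"),
    ])

    # A plain summary an assessor could paste into the evaluation report.
    for c in forum.characteristics:
        print(f"{forum.name} / {c.name}: {c.level.value} ({c.note})")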