Feb 22, 2012

TEI 2012 GSC Presentation Reflection


I presented my project from last semester, SnapToTrace, at the TEI 2012 Graduate Student Consortium on Sunday. I must say, it was somewhat of a dissonant experience to present after spending the last six months building a body of research on the foundation of its failings (though also a great exercise). However, it was also validating to hear criticism of this initial approach, and specifically of the delivery of my concept. It allowed me to see the strengths of my thesis as it currently stands and also the weaknesses where I need to focus, specifically evaluation and assessment. Our GSC mentors happen to have expertise in this area and offered some invaluable advice for moving forward with assessment “shortcuts” that are less time-intensive and in some ways more applicable.

1) Case studies that are microanalyses of a particular set of behaviors, attitudes, and actions. I don’t think this kind of pinpointed generalization is the best option when working to design inclusively for teachers who come from different backgrounds and teach students at different levels. The variables are too many.

2) A post-evaluation only, guided by a coding scheme that examines changes in terminology, attitude, and behavior.

3) Originality of ideas: if one of my main goals is to evaluate how effective this approach is in allowing educators to reframe concepts based on the materials and the context (design, collaboration, creativity, making), then one of my metrics could center on the intersection of different content domains and the ideas generated there.

The other major distinction to make is the difference between evaluating the toolkit and evaluating the learning that emerges from it. To do this, my initial thought is to implement a short pre-evaluation form and a post-evaluation in the form of a video interview. The latter will use a qualitative coding scheme aligned with the initial data gathered in the pre-evaluation. The video evaluation will also gauge the originality of ideas. Here is my light reading for the trek home, from Creativity and Cognition 2009:
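To make the pre/post comparison concrete for myself, here is a minimal sketch of what "a coding scheme aligned with the pre-evaluation" might look like computationally. The code categories and sample segments below are invented placeholders, not data from the actual study; the only real assumption carried over from above is that the same codes (terminology, attitude, behavior) are applied to both the pre-evaluation form and the post-interview transcript.

```python
# Hypothetical sketch: pairing pre- and post-evaluation responses under a
# shared qualitative coding scheme. Codes and segments are placeholders.

# The same scheme is applied to the pre-evaluation form and to the
# post-evaluation video interview transcript.
CODES = ["terminology", "attitude", "behavior"]

def code_counts(coded_segments):
    """Count how often each code was applied across a participant's segments."""
    counts = {code: 0 for code in CODES}
    for segment_codes in coded_segments:
        for code in segment_codes:
            if code in counts:
                counts[code] += 1
    return counts

def pre_post_change(pre_segments, post_segments):
    """Per-code change from pre- to post-evaluation for one participant."""
    pre = code_counts(pre_segments)
    post = code_counts(post_segments)
    return {code: post[code] - pre[code] for code in CODES}

# Example: one participant; each coded segment may carry multiple codes.
pre = [["terminology"], ["attitude"]]
post = [["terminology", "behavior"], ["terminology"], ["attitude"]]

print(pre_post_change(pre, post))
# → {'terminology': 1, 'attitude': 0, 'behavior': 1}
```

The point of the sketch is just that aligning the two instruments under one code list makes the deltas directly comparable, which is what the mentors' "post-evaluation guided by a coding scheme" shortcut seems to hinge on.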

Creativity Factor Evaluation: Towards a Standardized Survey Metric for Creativity Support
Erin A. Carroll and Celine Latulipe (UNC Charlotte)
Richard Fung and Michael Terry (University of Waterloo)

I think this would actually be super helpful for a lot of us, so I will post it to the minigroup — for instance, flow is referenced frequently as a major foundation of their framework (from a cursory glance, at least).

More to come soon!

Dec 22, 2011

IRB (Very Rough) Draft

Here is my initial version of the IRB, albeit without any of the participant forms, questionnaires, etc. All to come over winter break!

Oct 3, 2011

Bloom’s Taxonomy

I found this diagram of Bloom’s Taxonomy a few years back and still think it is the best hierarchy and explanation I have seen:

For those of you who aren’t familiar, a group of educators created Bloom’s Taxonomy in the 1950s to classify and scaffold learning objectives:

It refers to a classification of the different objectives that educators set for students (learning objectives). Bloom’s Taxonomy divides educational objectives into three “domains”: Cognitive, Affective, and Psychomotor (sometimes loosely described as knowing/head, feeling/heart and doing/hands respectively). Within the domains, learning at the higher levels is dependent on having attained prerequisite knowledge and skills at lower levels. A goal of Bloom’s Taxonomy is to motivate educators to focus on all three domains, creating a more holistic form of education.
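The "lower levels are prerequisites" structure described above can be sketched as a simple ordered hierarchy. This uses the six levels of the original (1956) cognitive domain; the helper function is my own illustration of the prerequisite idea, not part of the taxonomy itself.

```python
# The six levels of the original cognitive domain, ordered from lowest
# to highest. Learning at a higher level assumes the lower levels.
COGNITIVE_LEVELS = [
    "Knowledge",
    "Comprehension",
    "Application",
    "Analysis",
    "Synthesis",
    "Evaluation",
]

def prerequisites(level):
    """All lower levels a learner is assumed to have attained first."""
    index = COGNITIVE_LEVELS.index(level)
    return COGNITIVE_LEVELS[:index]

print(prerequisites("Application"))
# → ['Knowledge', 'Comprehension']
```

This ordering is also what makes Bloom's useful for evaluation: a participant-learner's growth can be read as movement up the hierarchy rather than as an unordered checklist.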

It is still very much in use and considered a backbone of sorts to curriculum development and assessment. Bloom’s will be an essential reference for contextualizing my thesis in formal and informal learning environments (though I think I am sticking with the latter) and for evaluating the growth of participant-learners.