
Exams: Create them, take them, view reports #12111

Conversation

nucleogenesis
Member

Summary

I've done my best to fold the existing data architecture code for Coach quiz reports and Learner quiz reports together with the updated data structures.

I've kept the front-end architecture shaped the same, but updated the shape of the data, basically testing out the reports, finding bugs, and then fixing them. The changes were in relatively self-contained / purpose-built modules -- that is to say, I've tried to be sure that there is little chance of regressions.

The exam.utils module has been significantly updated. There are now functions going from each data_model_version to the next (i.e., v0→v1, v1→v2, etc.).

There appear to be two things needed from this module at this time:

  • Convert an exam to the latest version
  • Fetch all of the exercises for the exam

There is now a general function for the first, and a second function that does the conversion and then fetches the nodes.
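The conversion chain described above can be sketched roughly like this (function names and bodies here are illustrative stand-ins, not the actual exam.utils API):

```javascript
// Hypothetical one-step converters; each migrates exactly one version.
function convertV0toV1(exam) { return { ...exam, data_model_version: 1 }; }
function convertV1toV2(exam) { return { ...exam, data_model_version: 2 }; }
function convertV2toV3(exam) { return { ...exam, data_model_version: 3 }; }

const CONVERTERS = [convertV0toV1, convertV1toV2, convertV2toV3];

// General entry point: apply every converter from the exam's current version on.
function convertExamToLatest(exam) {
  return CONVERTERS.slice(exam.data_model_version).reduce(
    (partiallyConverted, convert) => convert(partiallyConverted),
    exam
  );
}
```

Structuring the migrations as single-version steps means a v0 exam flows through every converter in order, while a v2 exam only runs the final step.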

References

Closes #12097

Reviewer guidance

Backward compatibility

A big part of this change involves ensuring backward compatibility. There are automated tests that do this, but I've not tested with any "data model version" below 2.

It seems that to get an exam < v2 you will need to create a quiz using Kolibri <= 0.11, as v2 seems to have been added in 0.12.

  • Create like 5 quizzes in Kolibri <= 0.11
  • Take a few of the quizzes with multiple learners (but also leave at least one untouched) -- also, be sure to get some wrong for a couple users
  • Start this PR's assets on the same KOLIBRI_HOME (stopping the old one first)

Here you should be able to do the following:

  • View exam reports for all exams that have been taken, for all users
  • View specific questions' reports in Coach
  • View "difficult questions" list in Coach reports
  • View quiz report as the learner
  • Take the quiz if it has not been taken previously by a user

@github-actions github-actions bot added APP: Coach Re: Coach App (lessons, quizzes, groups, reports, etc.) DEV: frontend labels Apr 26, 2024
@github-actions github-actions bot added APP: Learn Re: Learn App (content, quizzes, lessons, etc.) SIZE: large labels Apr 29, 2024
@rtibbles rtibbles left a comment

A few questions from a first read through - I may be wrong, but would like more assurance as to why I am wrong :)

@@ -15,7 +14,7 @@ import { ExamResource, ContentNodeResource } from 'kolibri.resources';
* @returns {array} - pseudo-randomized list of question objects compatible with v1 like:
* { exercise_id, question_id }
*/
- function convertExamQuestionSourcesV0V2(questionSources, seed, questionIds) {
+ function convertExamQuestionSourcesV0V1(questionSources, seed, questionIds) {
Member

This function name has been changed, but the output of this function hasn't changed at all, so what justifies the name change here?

Member

From what I see in the JSDoc, it seems that the purpose was converting v0 to v1, so probably the function had the wrong name?

I looked for convertExamQuestionSourcesV0V2 matches in the code, and I found this in a comment: https://github.com/AlexVelezLl/kolibri/blob/366b38342aa43930ce1c0f2a2c9b695258b40841/kolibri/core/exams/models.py#L134. I think it would be valuable to also update the name there.

Member

Just realized that this function indeed returns a V2 structure, since it includes the counter_in_exercise prop.
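For reference, the counter_in_exercise annotation that makes the output v2-shaped could be sketched like this (a guess at what annotateQuestionSourcesWithCounter does, not the actual implementation):

```javascript
// Hypothetical sketch: give each question a 1-based position within its exercise.
function annotateQuestionSourcesWithCounter(questions) {
  const counts = {};
  return questions.map(question => {
    counts[question.exercise_id] = (counts[question.exercise_id] || 0) + 1;
    return { ...question, counter_in_exercise: counts[question.exercise_id] };
  });
}
```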

@@ -63,16 +62,14 @@ function convertExamQuestionSourcesV0V2(questionSources, seed, questionIds) {

function convertExamQuestionSourcesV1V2(questionSources) {
// In case a V1 quiz already has this with the old name, rename it
if (every(questionSources, 'counterInExercise')) {
Member

I am not sure I understand the purpose of changing these functions - is there a bug here we are fixing?

Member Author

When I read this I thought it might miss a case where the question sources have some camelCase keys but not all of them... so naturally I overthought it. I pushed a change where it just uses some instead of every, which seems reasonable unless we're sure it's impossible to have both key styles in the same sources list.
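The difference between the two checks can be shown with plain array methods (lodash's every/some behave the same way here; the data is made up):

```javascript
// A mixed list: one source still has the old camelCase key, one is already renamed.
const questionSources = [
  { counterInExercise: 1 },
  { counter_in_exercise: 2 },
];

// every() misses the mixed case, so the rename step would be skipped entirely.
const allCamel = questionSources.every(q => 'counterInExercise' in q); // false
// some() still triggers the rename when only part of the list has the old key.
const anyCamel = questionSources.some(q => 'counterInExercise' in q);  // true
```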

Member Author

@AlexVelezLl this also does away with the copy / reference issue altogether by using the for loop to update things in place

resolve(exam);
}
}).then(exam => {
if (data_model_version <= 1) {
Member

Surely after each step here, we would need to update the data_model_version, otherwise we will update to v2 and no further, if the data_model_version is 1?

Member Author

The subsequent if-statements will run too, because 1 <= 2 is true, then the next because 1 <= 3 is true, etc.

If we updated the data_model_version I don't think that would impact how this works, though, and the checks could be made === checks rather than relying on this fall-through approach.

I think my reasoning for not updating the data_model_version was that the data on the front-end should reflect what is persisted to the DB in this case -- but thinking about it a bit more, it does make sense for that value to reflect the structure of the data as it is now passed around the front-end.
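The fall-through being discussed can be sketched in isolation (step names illustrative):

```javascript
// Each check uses <=, so an exam at version 1 satisfies both of the later
// checks and is upgraded v1→v2 and then v2→v3, without data_model_version
// ever being bumped along the way.
function plannedUpgradeSteps(exam) {
  const steps = [];
  if (exam.data_model_version <= 0) steps.push('v0→v1');
  if (exam.data_model_version <= 1) steps.push('v1→v2');
  if (exam.data_model_version <= 2) steps.push('v2→v3');
  return steps;
}
```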

Member

Yeah - I think my question here was connected to the v0v2 function - which converts from v0 to v2 (not v0 to v1, I think) - and so, as you correctly point out, once it got passed along here, we would be passing v2 stuff into the v1-to-v2 conversion because it had already been converted (although now I do understand your desire to make each function do a single version leap).

@@ -447,14 +443,14 @@ describe('exam utils', () => {
title: 'Count with small numbers',
},
];
- expect(converted).toEqual(
+ expect(converted.question_sources[0].questions).toEqual(
expectedOutput.map(q => {
q.item = `${q.exercise_id}:${q.question_id}`;
Member

I see the composite id we were using previously is being set as item on the questions by our functions here -- is this not being used in the coach reports (i.e., why did we have to set it up as id separately)?

Member Author

The item property seems to be used in some parts of the front-end. I just kept it here rather than taking on the task of unifying them under one name right now.

['A1', 'A2', 'A3'],
['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8'],
['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9'],
['A:A1', 'A:A2', 'A:A3'],
Member

This looks a little suspicious to me -- why are we updating this to the colon-separated item values, when this is specifically the QUESTION_IDS, with no other changes to the tests? Or are we just not making the correct assertions about this in the tests for this to matter, and this should actually be the ITEM_IDS, not the QUESTION_IDS?

Member Author

I think it's worth updating the API for the selectQuestions function a bit, because it now uses the unique composite IDs rather than the question/exercise ids separately.

I'll look at it again and reconsider some of the naming and add comments -- might come up with a follow-up "Refactor selectQuestions & tests" issue
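The composite id shape in question looks like this, matching the `A:A1`-style values in the test data (values made up):

```javascript
// A question's unique `item` id is its exercise id and question id joined by a colon.
const question = { exercise_id: 'A', question_id: 'A1' };
const item = `${question.exercise_id}:${question.question_id}`;
```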

Member Author

I made this commit hoping to clear up the expectations in the test & selectQuestions function call.

I have been inclined to just make id the key for the composite value rather than item, and to have a follow-up issue on deck to update the places that use item to call it id and add comments.

Member

I think there might be an argument to double down on item instead. I can double-check when I am back at my desk, but it is used fairly consistently down into the logging layer as well.

Member

See here where it ultimately ends up in the backend

item = models.CharField(max_length=200, validators=[MinLengthValidator(1)])

I think there's probably some systemic updates to be made to be more consistent.

Member Author

Awesome thanks for this! I've created this issue to align quiz creation with the use of item for the unique ID in particular.


copy.counter_in_exercise = copy.counterInExercise;
return annotateQuestionSourcesWithCounter(
questionSources.map(question => {
const copy = question;
Member

This copy object here is doing nothing, I think. The code is equivalent to replacing all copy references with question, since both names point to the same object. If the intention of the first implementation was to avoid editing the original object, we need to make a real copy, like const copy = { ...question }.
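The aliasing issue can be demonstrated in isolation:

```javascript
// Assigning an object to a new name only copies the reference, not the object.
const question = { counterInExercise: 3 };
const alias = question;
alias.counter_in_exercise = alias.counterInExercise;
// The "copy" and the original are the same object, so the original was mutated:
console.log(question.counter_in_exercise); // 3

// A shallow spread copy leaves the original untouched.
const original = { counterInExercise: 3 };
const copy = { ...original };
copy.counter_in_exercise = copy.counterInExercise;
console.log(original.counter_in_exercise); // undefined
```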

Member Author

Great catch thank you!

if (data_model_version <= 2) {
exam.question_sources = convertExamQuestionSourcesV2toV3(exam);
}
if (data_model_version <= 3) {
Member

I think we can remove this if here, and just execute the body in all cases. If in the future we have a v4, wouldn't we need to annotate the questions with item?


export async function convertExamQuestionSources(exam) {
const { data_model_version } = exam;

return new Promise(resolve => {
Member

If we are already using the async keyword in the function signature, wouldn't it be better to use await instead of this Promise wrapper, so the code is flatter?
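The suggested refactor, sketched with a hypothetical fetchNodes helper standing in for the real resource fetch:

```javascript
// Hypothetical stand-in for the real resource fetch.
const fetchNodes = exam => Promise.resolve([`node-for-${exam.id}`]);

// Before: an explicit Promise wrapper inside an async function adds a level
// of nesting that the async keyword already makes unnecessary.
async function convertWrapped(exam) {
  return new Promise(resolve => {
    fetchNodes(exam).then(nodes => resolve({ exam, nodes }));
  });
}

// After: the same flow with await is flatter and easier to follow.
async function convertFlat(exam) {
  const nodes = await fetchNodes(exam);
  return { exam, nodes };
}
```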

Member Author

Old habits die hard 😅 - good idea! I'll revise a bit

Member Author

@AlexVelezLl looks so much nicer using await - lemme know what you think

Member

Yees, this looks so much nicer! 👐

@nucleogenesis nucleogenesis added the P0 - critical Priority: Release blocker or regression label May 3, 2024
@nucleogenesis nucleogenesis linked an issue May 3, 2024 that may be closed by this pull request
@rtibbles rtibbles left a comment

Still a couple of things that need to be tweaked here. And one question about the reports.

return exam;
}

// TODO: Reports will eventually want to have the proper section-specific data to render
Member

We're not -- as in, not within this PR, or not within this release? Won't that just give us the same extremely-long-question issue in reports that we have been trying to avoid in the taking of quizzes?

}

return annotateQuestionsWithItem(exam.question_sources);
// TODO This avoids updating older code that used `item` to refer to the unique question id
Member

Looks like this needs to be updated/removed in line with this issue, which is binding us to the opposite course of action: #12127

Maybe we should just update the annotateQuestionsWithItem to operate on a question_sources in the v3 shape?
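That could look something like this -- a sketch assuming the v3 shape nests questions under sections (as the updated tests suggest via question_sources[0].questions), not the actual helper:

```javascript
// Walk the v3 shape (a list of sections, each holding questions) and annotate
// each question with its composite `item` id in place.
function annotateQuestionsWithItem(questionSources) {
  for (const section of questionSources) {
    for (const question of section.questions) {
      question.item = `${question.exercise_id}:${question.question_id}`;
    }
  }
  return questionSources;
}
```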

methods: {
saveQuizAndRedirect() {
this.saveQuiz().then(() => {
this.$router.replace({ name: PageNames.EXAMS });

@@ -55,61 +55,63 @@ export function showExam(store, params, alreadyOnQuiz) {
contentNodes => {
if (shouldResolve()) {
// If necessary, convert the question source info
const question_sources = convertExamQuestionSourcesToV3(exam, { contentNodes });
Member

Not here, but in the unedited code above, couldn't this use the updated util function that you made in this PR that fetches the exam and the content nodes all at once?


radinamatic commented May 8, 2024

Tested the latest EXE asset on Windows 10, using Firefox and Edge for coach and learners, respectively.

I can appreciate that backward compatibility was the main scope here, and I had planned to test it by upgrading from a previous version, but several issues were present even out of the box, so it made more sense to report them first, before falling down the upgrade rabbit hole.

  1. Coach can create and save the quiz, but cannot re-open it to edit questions/resources (just the quiz title and assignees, same as in previous versions)

  2. Expand all and Collapse all buttons are not working, and neither is opening the accordion panel to preview questions, neither with mouse nor with keyboard. I can see the TypeError: _vm.activeResourceMap[question.exercise_id] is undefined error in the console.

  3. Learner can start the quiz, but they cannot complete it; the Submit button is nowhere to be seen. For the same reason they cannot see the quiz report.

    no-submit

    And no idea where the 👻 100 additional questions are coming from.

  4. Coach can see the reports, but they all report learners have started as it's not possible to complete (see nº 3.).

  5. Difficult questions are not displayed. I had 3 learners give incorrect answers to the same 6 questions.

    difficult-questions

I also have some keyboard-navigation-specific issues; I will report the summary later. It might be better to address these issues first, before testing the backward compatibility.

db-logs.zip


rtibbles commented May 8, 2024

cannot re-open it to edit questions/resources

This continues to be intentional, although we could potentially add some leeway here in the case that the quiz has never been activated. The issues arise when a coach edits a quiz when students have already taken it, as without some sort of version control in place, we would not be able to properly parse responses from older versions.

@marcellamaki

add some leeway here in the case that the quiz has never been activated

This seems like it could be useful: with the new editing options, being able to edit/revise the quiz (before it is made visible to learners) seems more like expected behavior than it did with our current quiz selection/creation flow. But I don't know that it necessarily needs to be in the MVP. If it's not, though, maybe some helper text at "save" would be useful.

@radinamatic

Ditto to what Marcella said: we're adding more complex (enhanced) features to quiz creation, but we're not offering any options for revision and corrections? 🤔

Agreed that the limitation needs to be before the quiz is made visible to learners, but I can only imagine the frustration of the coach who invested time and effort to carefully craft the sections, add instructions and descriptions, select and order questions, and then they have to do it all over from scratch just because they discovered a typo after saving... 😭

Not to mention that error prevention is part of the WCAG 😉

@radinamatic

At a minimum, if we decide not to include it in the MVP, we must add a "Please review the quiz carefully, as you will not be able to edit it after saving" modal to warn users, but honestly that just looks... well, just... unpolished (to put it mildly). 😞

@nucleogenesis

Users can duplicate a quiz, so in that case I think it makes even more sense to include the ability to "re-open" a quiz for editing within the quiz creation tool.

I think that in the medium-term it'd be ideal for us to revise some of the UX around this, but in the short term I think that we could get @tomiwaoLE 's thoughts on how/where to put the button / option to edit a quiz.

I think maybe having an "Edit quiz" button next to / near the "Start quiz" button might be worth considering along with adding text to the "Start quiz" modal (where we say 'this will make learners download 1234kb') to say "You will no longer be able to edit this quiz" or something?

@radinamatic

radinamatic commented May 8, 2024

(continued)

  6. That quiz with 107 questions kept throwing an error when the learner inputs the answer on the last question and clicks the Next button (or presses Enter when navigating by keyboard).

    last-question

Keyboard navigation issues

  • When focusing to select the Edit section option and pressing Enter, the drop-down remains visible even after the side panel with Section settings is opened.

    floatting-dropdown
  • Number of questions field works perfectly, the user can type and increase/decrease the number with arrow keys 💯
    However, if they navigate further and focus the + button, it goes wild and each Enter key press adds (or concatenates) a number 1 to the number of questions... 🤪

    crazy-plus

    I did delete all those 1's to be able to proceed, but could that be why the quiz in question had 100 👻 questions?

  • When focusing the Change resources button, pressing Enter, and selecting resources to perform the change: once the work in that side panel is completed, whether the user presses the focused Save changes button or decides to abandon it and focus the Back ⬅️ button, the focus should return to the Change resources button that initiated the whole workflow. As of now, the Close ✖️ button in the upper right corner is focused instead.

  • By the same logic, if the Options button opens the Section settings, when that side panel is closed, the focus should return to the Options button.

  • The Section order section inside Section settings 🙃 is not keyboard navigable: the focus jumps right over it and lands on the Delete section button.

    jump

  • The 🔽 and 🔼 buttons to reorder questions by keyboard are focusable, but they do not work, nothing happens after pressing either Enter or Space. Not sure if that is because the whole accordion is not working in this asset...

@nucleogenesis

@radinamatic

I've fixed the accordion expand/collapse issue.

I also fixed the issue where you ended up with 107 questions when it was supposed to be 17... JavaScript is fun: rather than adding 10 + 7 (two numbers), it was adding 10 + "7" (a number and a string), which forces the number 10 into the string "10" and then appends "7" to it, giving "107". So now I've made sure it is always working with numbers 😅
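The coercion at the heart of the bug is easy to reproduce (the variable names are illustrative):

```javascript
// "+" concatenates as soon as either operand is a string, which is how
// 10 questions plus "7" questions became 107.
const fromButtons = 10; // the +/- buttons kept the value as a number
const typedIn = '7';    // a value typed into an <input> arrives as a string

console.log(fromButtons + 7);               // 17 — numeric addition
console.log(fromButtons + typedIn);         // '107' — string concatenation
console.log(fromButtons + Number(typedIn)); // 17 — coerce to a number first
```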

You should now be able to create a quiz and test it. I'm honestly not sure exactly why the number was being passed around as a string rather than a number -- I feel like I should have noticed this with all of the quizzes I've been taking. I think it may have depended on whether you click the +/- buttons to change the number or type the number in. When I selected the box and typed 7 on the second section, it ended up thinking there were 107 questions. But as far as I can remember I virtually always click the buttons rather than type it in 🤷🏻‍♂️

Fun and silly bug :)


I mentioned to @marcellamaki in a chat today that I found it weird that I couldn't submit a quiz, so I looked back at 0.16, and the Submit button is always visible there.

I looked in the Figma thinking that I'd find a discussion around the fact that the new designs remove the previously always-visible "Submit quiz" button that you'd see in 0.16.

So I'll ask for them to weigh in on that separately from this PR.

@pcenov

pcenov commented May 9, 2024

Hi @nucleogenesis, great to see so much progress made here! Wow! Today I've started testing scenarios for users upgrading from Kolibri 0.15.2 to this latest version. So far I am happy to report that almost everything is working perfectly fine and as expected.
Here are a few issues that I was able to reliably replicate:

  1. Missing value for 'File size to download' when starting a quiz which was created on 0.15.2 but not started until after the upgrade:
Missing.File.size.to.download.mp4
  2. Cannot view a question listed as a difficult question:
Cannot.open.a.question.in.Difficult.questions.mp4
  3. There's just an empty gray rectangle in place of the section title for quizzes created in 0.15.2:

section placeholder

  4. For one particular quiz I stumbled upon a difference in the number of questions answered correctly between the report I was seeing as a Coach in 0.15.2 and the one in the current version. I'll test further tomorrow to see if I can reliably replicate it:

Questions answered correctly

Here are both home folders in case you need them:

kolibri16HomeFolder.zip
kolibri15HomeFolder.zip

@rtibbles rtibbles left a comment

Nothing blocking, but still some questions.

/**
* @returns {Promise} - resolves to an object with the exam and the exercises
*/
export async function fetchExamWithContent(exam) {
Member

This function name is mildly misleading, as it suggests this function fetches the exam, whereas it actually requires the exam to already have been fetched and passed in as the only argument.

@@ -557,6 +561,7 @@
left: 0;
padding: 1em;
margin-top: 1em;
background-color: #ffffff;

@nucleogenesis nucleogenesis merged commit 8ff5292 into learningequality:develop May 14, 2024
31 checks passed