Social Security Tribunal of Canada

An evaluation of how easy it is to read decisions of the Social Security Tribunal

Methodology

Based on the training received by members, the Tribunal’s Evaluation unit developed a method to test how easy it is to read a sample of decisions, focusing on how well the constituent elements of a decision are constructed and contribute to readability. The study applied three methods to collect and triangulate data on readability. First, the Evaluation unit created a scorecard of 16 questions (15 in the French version) to look for and test elements of readability, most of which were covered in the members’ training. The scorecard divided each decision into three categories – overview, structure and style – and drilled down to specific elements within each category (see Annexes A and B for the complete scorecards). Scoring was based on different rating scales, depending on the element.

The Tribunal’s Evaluation and Linguistic Services units collaborated to read and score each sampled decision. The Evaluation unit then analyzed the data to find patterns and trends, and to compare different member subgroups (such as division, stream, official language) and different periods (that is, before training and after). The study did not examine interlocutory decisions or the correctness of decisions.

Complementing the scorecard was a survey sent in October 2020 to all members to collect their perspectives and experiences writing in plain language. Annex D contains the survey and responses.

Finally, the study tapped the Flesch-Kincaid readability test, an automated formula commonly used to assess the readability of a given document. The formula calculates only metrics related to sentence length and word length, and for that reason does not factor heavily in the study’s conclusions beyond assigning a notional grade reading level to each decision.
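As a rough illustration only (this is not the Tribunal’s actual tooling; members used Antidote), the Flesch-Kincaid grade level can be computed from exactly those two metrics. The sketch below uses a crude vowel-group heuristic for syllable counting, whereas real tools use dictionary-based syllabification:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Crude heuristic: count runs of consecutive vowels, minimum 1.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (total_syllables / len(words))
            - 15.59)
```

Short sentences of short words produce a low notional grade, while long, polysyllabic sentences push it up sharply, which is why the formula can say nothing about organization, flow, or coherence.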

The sample

Sample breakdown

There were 189 decisions sampled in total. These decisions fell into the various categories noted in the table below. (Note that one decision could fall into multiple categories.)

Category Number of decisions
English 140
French 49
Appeal Division 21
General Division – Income Security 101
General Division – Employment Insurance 67
Represented appellants 76
Self-represented appellants 113
Before training 103
After training 86

The study assembled a sample of 189 decisions rendered in 2019 and 2020.

From each member working in English, two decisions were randomly chosen, one rendered before the most recent plain language session in November 2019, and one after.

On the French side, four decisions were randomly selected from each member, two before plain language training in February 2020, and two after.

Every attempt was made to sample from every member. However, leave, availability, or the end of a mandate limited what we could sample from certain members.

Overall results

Bar graph showing the overall results of the readability evaluation for decisions at the Tribunal.
Text version
Lower point scores indicate decisions that are challenging to read. Higher point scores indicate decisions that are easy to read. The letter “n” refers to the number of sample decisions.

Percentage of decisions that scored less than 5 points: 7% (n=14)
Percentage of decisions that scored 5 to 7 points: 22% (n=42)
Percentage of decisions that scored 8 to 10 points: 27% (n=51)
Percentage of decisions that scored 11 to 13 points: 27% (n=51)
Percentage of decisions that scored 14 to 17 points: 16% (n=31)

A decision tested against the scorecard can score a maximum of 17 points (16 in the French version) and a minimum of -6. Of the 189 decisions, 16% (n=31) were easily readable, having scored between 14 and 17 points. Roughly 30% (n=55) were difficult or frustrating to read, scoring 7 points or less (see table above).
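The 17-point ceiling and the floor of -6 follow directly from the points assigned to each answer in the English scorecard. As a quick sanity check (Python; question IDs and point values taken from Annex A):

```python
# Best and worst points available per scorecard question (English version, Annex A).
best = {"A1": 2, "A2": 2, "A3": 1, "A4": 2,            # Overview
        "B1": 1, "B2": 1, "B3": 1, "B4": 1, "B5": 2,   # Structure
        "C1": 1, "C2": 1, "C3": 0, "C4": 0,            # Style
        "C5": 0, "C6": 0, "C7": 2}
worst = {"A1": 0, "A2": 0, "A3": 0, "A4": 0,
         "B1": 0, "B2": 0, "B3": 0, "B4": 0, "B5": 0,
         "C1": 0, "C2": 0, "C3": -1, "C4": -1,
         "C5": -1, "C6": -1, "C7": -2}

print(sum(best.values()))   # maximum score: 17
print(sum(worst.values()))  # minimum score: -6
```

The only questions that can subtract points are the style questions C3 to C7, which is why a decision with a strong Overview and structure cannot fall far below zero.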

What are the characteristics of the most readable decisions? As this method is untried, no scorecard targets or benchmarks were set before the study. But these decisions showed a clear appreciation of the hurdles that self-represented appellants face, and actively supported them with a clear and plainly written Overview and a writing style that consistently guided and informed the reader through to the end.

These 31 reader-friendly decisions did not correlate with any particular subgroup of members. They were spread among 22 different members across all three streams and both official languages, and were issued to appellants both with and without a representative. A deeper analysis does, however, reveal key readability differences among these subgroups on some of the scorecard questions.

Overview

What we measured

Every decision began with a distinct Overview section, which we assessed against the following four indicators:

  • A1. Is the essence of the story clear?
  • A2. Is the basic question that the member must decide both clear and precisely stated?
  • A3. Was any unnecessary detail included?
  • A4. Is the decision or result clear?

Why we measured this

Tribunal members received training on writing with clear and logical structures. At the apex of the decision structure is the Overview. An effective Overview prepares the reader with essential background facts and information and launches the reader onto the decision path taken by the member.

What the results say

Table showing the results of the Overview evaluation.
Text version
A1. Is the essence of the story clear?
  • Yes, to lay readers: 58%
  • Yes, to lawyers: 35%
  • No: 6%
A2. Is the basic question to be decided clear and precisely stated?
  • Yes, to lay readers: 42%
  • Yes, to lawyers: 25%
  • No: 32%
A3. Was any unnecessary detail included?
  • No: 89%
  • Yes: 10%
A4. Was the decision or result clear?
  • Yes, clear at start: 69%
  • Yes, at end: 19%
  • No: 13%

Strengths

Bar graph showing the readability of decisions broken down by self-represented and represented appellants.
Text version
A1: Is the essence of the story clear? – Represented appellants: 50%; Self-represented appellants: 64%
A2: Is the basic question to be decided clear and precisely stated? – Represented appellants: 34%; Self-represented appellants: 47%
A3: Was any unnecessary detail included? – Represented appellants: 85%; Self-represented appellants: 91%
A4: Was the decision or result clear? – Represented appellants: 67%; Self-represented appellants: 70%

We found that Overviews written after plain language training scored better than those written before, by a notable if not wide margin of 4% to 14%, depending on the question. Members appeared to confirm this in the survey, where they unanimously reported that they believed their reasons are now more readable after taking the training (60% much more, 40% somewhat more).

Similarly, Overviews written for self-represented appellants scored modestly higher than those written for appellants with a representative, by a margin of 6% to 14%, depending on the question (see chart above). Taken further, a comparison between decisions written for self-represented appellants before the last member training and those written after shows that the later Overviews read more clearly and were more complete. This finding aligns with members’ own feedback: all of them reported improving after training, and about half now adjust the language of their reasons to the intended audience.

These strengths were evenly distributed across the streams, but one subgroup stands out: decisions written in French. Overviews in French met the plain language standard more often than English Overviews, by margins ranging from 10% to 33%.

Another largely positive finding was that 69% of decisions gave the result at the beginning, usually in the first sentence, while in a further 19% the result was clear only at the end of the decision. A small but not insignificant minority of 13% (n=24) left the result unclear or unstated.

Opportunities

In over 10% of decisions sent to the parties (as opposed to decisions later anonymized for posting on the Tribunal website), the Overview referred to the parties as either the “appellant” or “respondent” but never identified by name who was whom. A layperson not familiar with these terms could infer by context which party they were, but that task was not helped when Overviews also started off with abbreviations without explanation (e.g. MQP) or legal terminology (e.g. just cause, exercise power judicially).

The largest negative finding was in question A2 where nearly one-third of decisions omitted from the Overview a clear and precise statement of the issues under appeal. In these decisions, the Overview started with an account of the key facts in the appellant’s story that a reader could expect was building to a statement of the issues. Instead, these decisions only revealed the issues for the first time several paragraphs later in the Issues section.

The biggest positive finding was in question A3, where 89% of Overviews left out unnecessary detail, such as exact dates, locations, medical conditions, and employment history. A concise Overview, however, was sometimes offset by a detailed and distracting section on Preliminary Matters. A small minority of decisions used Preliminary Matters to describe administrative or procedural history, but the unintended effect was to interrupt the flow created in the Overview. When a lay reader emerges from a particularly lengthy Preliminary Matters section into the member’s analysis, flipping back to the Overview sometimes became necessary to find where the trail left off. The interruption could be lessened if the section is kept brief and features transitional words or phrases.

Structure

What we measured

Every member organized their reasons to link together the evidence, their findings, conclusions and the law. We assessed the structure of those reasons against these indicators:

  • B1. Are the headings and sub-headings case-specific or generic?
  • B2. Can you skip to one discrete issue if you wanted to understand the law or the conclusion on that particular issue?
  • B3. Within each issue-section, is the issue set out clearly and early?
  • B4. Do paragraphs observe the point-first approach?
  • B5. Does the member clearly explain how the law applies to each issue or the facts?

Why we measured this

All members received training to structure reasons in a smooth and logical manner so that they are both readable and, effectively, “raidable” to the parties. A raidable decision is organized and written in a way that allows the reader to scroll for specific content by following markers laid down by the member, such as headings, topic sentences, and transitions.

What the results say

Table showing the results of the structure evaluation.
Text version
B1. Are most headings and sub-headings case-specific (vs generic)?
  • Yes: 43%
  • No: 57%
B2. Can you skip to one discrete issue?
  • Yes: 81%
  • No: 19%
B3. Within each sub-section, is the issue set out clearly and early?
  • Yes: 66%
  • No: 34%
B4. Do paragraphs observe the point-first approach?
  • Yes: 89%
  • No: 11%
B5. Does the member clearly explain how the law applies to each issue or the facts?
  • Yes, to lay readers: 43%
  • Yes, to lawyers: 56%
  • No: 2%

Strengths

When organizing the issues, members adopted the issues-driven approach 81% (n=154) of the time, separating the issues into broad, discrete sections with self-contained analyses. All decisions used headings and subheadings, and 89% of decisions applied point-first writing, the practice of articulating the point of a paragraph followed by supporting details.

While members universally used headings and subheadings to signpost the reader, many were of the generic variety, such as Introduction, Issues, Analysis, and Conclusion. An improvement on the generic heading is a case-specific heading that thoughtfully describes the section that follows. Case-specific headings offer an outline of the member’s reasoning and allow the reader to follow the member’s logical progression from one issue to the next. The best scoring decisions consistently phrased their headings and subheadings as questions or conclusions on an issue. Forty-three percent (n=82) of decisions used case-specific headings most of the time, while 57% (n=107) seldom used them or only used them as sub-headings.

Opportunities

The study also found that one-third of decisions did not consistently set out the issues clearly and early in the Issues section. In some decisions, the issues were stated at the outset but were too legalistic for the lay reader; in others, the issues were clear but not revealed until several paragraphs into the Issues section. In still other decisions, multiple issues and legal tests posed a particular challenge even when the member stated them upfront. A particularly lengthy analysis often imposed on the lay reader the burden of flipping back and forth to thread findings to one of the multiple issues and its legal test. Higher-scoring decisions avoided this challenge by tactically inserting case-specific subheadings and repeating the issues and the law as necessary.

Bar graph showing what percentage of explanations were clear to a lawyer, clear to a lay reader, or were too generic or had no explanation.
Text version
Clear to a lawyer – Self-represented appellants: 53%; Represented appellants: 59%; All decisions: 56%
Clear to a lay reader – Self-represented appellants: 45%; Represented appellants: 39%; All decisions: 43%
Generic or no explanation – Self-represented appellants: 2%; Represented appellants: 1%; All decisions: 2%

The lowest performing area was question B5, “Does the member clearly explain how the law applies to each issue or the facts?” We found that 98% of decisions explained the law, but only 43% did so at a level suitable to a lay reader. Taken further, if the appellant was self-represented, the figure rises slightly to 45%, but it drops to 39% if the appellant was represented (see chart above). These results are consistent with the tenor of the feedback members shared through the survey. Members are still honing their ability to reduce complex legal concepts into plain language in a way that does not compromise accuracy or become the basis for an appeal or judicial review. One member captured the common sentiment:

[It’s] not really a lack of training but fear that a plain language explanation of the legal test may be viewed by the [Appeal Division] as inaccurate and therefore being an error of law. Also, it is primarily a question of practice!

A member of the Appeal Division voiced similar concern about the inherent risk in paraphrasing technical language:

Sometimes when describing the arguments of the Minister/Commission, I worry that if I paraphrase too much, they will take issue with it. So sometimes my descriptions of the Minister's submissions are a little less plain language to better reflect the actual content of their submissions. 

The data does suggest that plain language explanations of the law improved incrementally in the months immediately after plain language training (see also Annex C: Self-represented versus Represented Appellants). To achieve better results, members expressed a need for more time to practice this craft and, for some, supplementary aids they describe below: 

What additional training or support would you like from the Tribunal? (Select all that apply)
  • Decision template: 48%
  • Plain language guide or updated style guide: 48%
  • Editing tools or software: 43%
  • Training or refresher sessions: 43%
  • Glossaries: 28%
  • Don’t know: 10%

Style

What we measured

The Tribunal trained members to write more clearly and simply. This section examines elements of members’ writing style against the following indicators:

  • C1. Are there any string citations that really interrupt your reading?
  • C2. Are there long block quotations that really interrupt your reading?
  • C3. Overuse of legalese or jargon?
  • C4. Overuse of nominalizations or expletive constructions?
  • C5. Are there paragraphs that could have been broken up or replaced by lists?
  • C6. Passive sentences that should be active?
  • C7. Once you finish reading, think about the writing style generally; is it great, very good, average, needs improvement, terrible?

Why we measured this

Style earns trust, and the appellant is more likely to trust that their appeal was fairly decided if they find the reasons explained in a clear and simple style of writing versus one that is opaque and legalistic.

Table showing the results of the style evaluation.
Text version
C1. Are there any string citations that really interrupt your reading?
  • Yes: 3%
  • No: 97%
C2. Are there long block quotations that really interrupt your reading?
  • Yes: 7%
  • No: 93%
C3. Overuse of legalese or jargon?
  • Yes: 40%
  • No: 60%
C4. Overuse of nominalizations or expletive constructions?
  • Yes: 20%
  • No: 80%
C5. Are there paragraphs that could have been broken up or replaced by lists?
  • Yes: 25%
  • No: 75%
C6. Passive sentences that should be active?
  • Yes: 51%
  • No: 49%
C7. Overall, the writing style was…
  • Great: 2%
  • Very good: 28%
  • Average: 53%
  • Needs improvement: 15%
  • Terrible: 2%

Strengths

The study examined the frequency of style choices known to hinder readability. Writing style of course varied among members, but we observed common practices across the Tribunal. The vast majority avoided obtrusive string citations (97%) and long block quotations (93%), perhaps because these are more a feature of court judgements. Further, 80% (n=151) of decisions avoided the common writing pitfalls of excessive nominalizations (e.g. draw an inference rather than infer) and expletive constructions (e.g. It is, There are). English decisions were more successful than French decisions in limiting their use but, overall, no decision was entirely free of these practices, and some uses were reasonable in context.

The study also found that members in 75% (n=141) of decisions were conscious about not overwhelming readers with lengthy content. Rather, members limited paragraph size by breaking up content into manageable chunks, such as a series of short paragraphs or numbered lists. Decisions in French (84%) and decisions to self-represented appellants (81%) scored high in this regard.

Opportunities

Survey responses tell us that writing in the active voice is one of the plain language strategies that continues to elude many members. Scoring found that in 51% (n=96) of decisions, passive voice was used excessively or avoidably. That figure rises markedly in key subgroups: appeals in English (62%), the Appeal Division (62%), and represented appellants (59%). The scores generally did not improve after plain language training. Desktop tools like Antidote, which is available to members, can help identify and reduce passive voice, and the 51% indicates room for improvement. But the fact that nearly half (49%) of decisions used passive voice with purpose (e.g. to emphasize the object of the action) may indicate that balancing its use is a more realistic goal than stamping it out. As one member put it:

Avoiding the passive is not always possible given the nature of what we have to decide. I've eliminated it where I can, but there are times when it cannot be avoided.

Members also told us they are conscious not to overuse jargon, and many offered a plain definition when they used it. Nonetheless, scoring found 40% (n=76) used legalese or jargon excessively, including a high of 71% (n=15) of Appeal Division decisions. Examples are wide ranging but include “ordinarily”, “prejudice”, “just cause”, and “dispose of an appeal”. French examples include prépondérance des probabilités, sciemment, justice naturelle, and fardeau de la preuve. But like switching from passive to active voice, members explained that converting legal terms to plain language is an ongoing balancing act between readability objectives and legal accuracy. The study did find significant improvement after plain language training, when jargon overuse fell from 47% to 33% of decisions examined.

Impact of training

The results show that over time the overall readability of the Tribunal’s decisions improved at a slow but steady pace. The median score of all decisions before the most recent language training was 9 out of 17 points. That improved to 11 after training. Before training only 34% of decisions scored between 11 and 17 points whereas 51% scored in this upper range after training. The bell curve below shows a rightward shift of scores over time driven by improvements to overviews, plainer explanations of the law and the reduced incidence of legalese and jargon. Other scorecard questions saw stable or statistically insignificant results.

Bell curve graph showing the median readability scores for decisions before training and after training.
Text version
Distribution of scores before and after training.

The bell curve shows that median scores improved after training. Before training, the median score was 9 out of 17. After training, the median score was 11 out of 17.

Where grade reading level is concerned, the bell curve shows only a slight leftward shift from an average grade of 10.7 to 10.3. But the taller and narrower shape of the black curve to the left indicates that more decisions have begun to cluster around grade 10 with fewer decisions reading at grades 12, 13 and 14.

Both sets of bell curves illustrate that decision writing is moving in the desired direction. But the extent to which decision writing improved due to training, the availability of writing tools and aids, or simply practice over time is not explained by the data. Most likely, all three factors made a difference.

Bell curve graph showing the median grade reading level for decisions before training and after training.
Text version
Grade reading level before and after training

The bell curve shows that median grade reading levels fell at around a grade 10 level both before and after training. However, after training, fewer decisions were at grade 12, 13, and 14 levels.

Members first received plain language training in January 2018, followed by subsequent sessions and supporting tools, and had at least 15 months to practise before this study began sampling their decisions for analysis. Some members found it challenging to apply training recommendations, particularly those delivered by non-legal professionals or those inconsistent with templates or internal messaging. Against this backdrop, each member has had to find their own way to write for a diverse readership in their own voice as independent adjudicators. One member summed up the journey in personal terms:

The decision templates are not in my voice and often do not contain terms or explanations that I've given in the hearing. I make a point of confirming the claimant understands any and all terms during the hearing, and then I make a point of repeating this in the decision. I prefer to write in plain language in my own voice and reflecting the discussions I've had with the claimant.

Conclusion

Before the study launched, the Chairperson of the Tribunal set a reading target of grade 9 for all decisions. The study applied the Flesch-Kincaid readability test to calculate the grade reading level of each decision. It found that 32% (n=60) met that target and a further 33% (n=62) followed closely at grade 10. Because the formula works by quantifying sentence length and word length, the study developed the scorecard tool to qualitatively measure the organization of thoughts, word choice, flow, and coherence. Neither method offers certain results, but together, with member feedback, a clearer picture emerges from the data.

Strengths

  • Members benefitted from training. This shows in aggregate scoring and in key subgroups, particularly decisions for self-represented appellants and French-speaking appellants.
  • The vast majority of decisions stated the result and the issues at the outset.
  • Most members structured their decisions into discrete issue-sections using the point-first approach, with headings and subheadings in a mix of generic and case-specific styles.
  • Most decisions avoided lengthy paragraphs and broke them up into lists or smaller paragraphs.
  • There was near universal avoidance of long string citations, block quotations, and to a lesser extent, of writing traps like nominalizations, expletive constructions, and passive sentences.

Opportunities

  • Overviews should always open with a statement of the issues and result, spell out acronyms and identify the parties.
  • Issue sections should state the issues and legal test clearly and early, and again where the analysis is lengthy or involves multiple issues.
  • To encourage flow, the Preliminary Matters section should be appropriately brief and contain transitions.
  • Jargon could be reduced and plainer explanation of the law encouraged (without compromising legal accuracy).
  • Headings and subheadings should be crafted with case-specific content as much as possible.
  • Consider providing members with a workplace environment they identified as helpful to improve their writing: time to practice free of barriers in the form of unreasonably high performance standards or the pressure to get decisions out quickly; and ongoing professional development and supplementary aids, such as decision templates, plain language guides, updated style guides, editing tools or software, and glossaries.

Adjudicators suffer no shortage of expert advice on writing decisions, offered through online guides, conferences and workshops. Attempts to systematically measure readability to reveal insights that drive continuous improvement are rare. This report is one such attempt in the Social Security Tribunal’s ongoing pursuit of enhanced access to justice for Canadians.

Annex A: Readability scorecard for decisions in English

A. Read the overview (each answer shows the points available; testers also recorded points awarded and comments)

A1 Is the essence of the story clear?
  • YES – clear to a layperson: 2
  • YES – clear to a lawyer: 1
  • NO – unclear, or I had to re-read/read further: 0
A2 Is the basic question that the member must decide both clear and precisely stated?
  • YES – clear and precise to a layperson: 2
  • YES – clear and precise to a lawyer: 1
  • NO – unclear or imprecise or unstated: 0
A3 Was any unnecessary detail included?
  • YES – included: 0
  • NO – left out: 1
A4 Is the decision or result clear?
  • YES – clear at start: 2
  • YES – clear, but I had to skip to the end: 1
  • NO – unclear or unstated: 0

B. Check the structure

B1 Are the headings and sub-headings case-specific or generic? Answer “YES” if the headings and sub-headings are case-specific most of the time.
  • YES – case-specific all or most of the time: 1
  • NO – generic or no headings: 0
B2 Can you skip to one discrete issue? (If you wanted to understand the law or the conclusion on that particular issue)
  • YES – clear or not applicable (only one issue): 1
  • NO – locating discrete issues is difficult: 0
B3 Within each issue-section, is the issue set out clearly and early?
  • YES – clearly and early: 1
  • NO – one but not the other or not at all: 0
B4 Do paragraphs observe the point-first approach? Choose “YES” if all or most substantive paragraphs started with a topic sentence.
  • YES: 1
  • NO: 0
B5 Does the member clearly explain how the law (statute and case law) applies to each issue or the facts?
  • YES – clearly to a layperson: 2
  • YES – clearly to a lawyer: 1
  • NO – cites the law generically or not at all: 0

C. Look for style choices

C1 Are there any string citations that really interrupt your reading? Choose “NO” if any string citations are in footnotes or are otherwise unobtrusive.
  • YES – distracting: 0
  • NO – none or string citations are unobtrusive: 1
C2 Are there long block quotations that really interrupt your reading? Choose “NO” if there are no block quotations or they come with a summary or statement of inference that the reader should make.
  • YES – one or too many: 0
  • NO: 1
C3 Overuse of legalese or jargon?
  • YES (use Antidote; write examples in the Comments box): -1
C4 Overuse of nominalizations or expletive constructions?
  • YES (write examples in the Comments box): -1
C5 Are there paragraphs that could have been broken up or replaced by lists?
  • YES (write paragraph number in Comments box): -1
  • NO: 0
C6 Passive sentences that should be active? Choose “NO” if none or if passive voice was used for an apparently intended purpose.
  • YES (use Antidote; write examples in the Comments box): -1
  • NO: 0
C7 Once you finish reading, think about the writing style generally (ignoring structure and content). Especially consider how many instances of each bad writing style you found. Was the writing style generally:
  • Great! – could use as an example: 2
  • Very good: 1
  • Average – somewhere in the middle: 0
  • Needs improvement: -1
  • Terrible – dense, wordy, confusing: -2

D. Final scoring
D1 Total above:
D2 Flesch Reading Ease (use Antidote):
D3 Flesch-Kincaid Grade Level (use Antidote):
D4 Words (use Antidote):
D5 Tester’s name:
D6 File number:

Annex B: Readability scorecard for decisions in French

A. Lire l’aperçu (chaque réponse indique le nombre de points possibles; les points attribués et les commentaires étaient aussi consignés)

A1 L’essentiel du récit est-il clair?
  • OUI – clair pour une personne non initiée : 2
  • OUI – clair pour une avocate ou un avocat : 1
  • NON – manque de clarté, ou j’ai dû relire ou lire plus loin : 0
A2 La question principale que doit trancher la ou le membre est-elle énoncée de façon claire et précise?
  • OUI – claire et précise pour une personne non initiée : 2
  • OUI – claire et précise pour une avocate ou un avocat : 1
  • NON – manque de clarté ou de précision, ou n’est pas énoncée : 0
A3 Des détails inutiles ont-ils été inclus?
  • OUI – inclus : 0
  • NON – omis : 1
A4 La décision ou le résultat sont-ils clairs?
  • OUI – clairs dès le départ : 2
  • OUI – clairs, mais j’ai dû aller voir à la fin du document : 1
  • NON – manque de clarté ou ne sont pas énoncés : 0

B. Vérification de la structure

B1 Les titres ou sous-titres sont-ils propres au cas ou génériques? (« Faits », « Droit applicable », « Questions en litige », « Analyse » sont génériques plutôt que propres au cas.) Répondre « OUI » si les titres ou les sous-titres sont généralement propres au cas.
  • OUI – tous ou majoritairement propres au cas : 1
  • NON – génériques ou aucun titre : 0
B2 Pouvez-vous passer directement à une question en litige précise (si vous souhaitez comprendre le droit ou la conclusion par rapport à cette question en litige particulière)?
  • OUI – clair ou ne s’applique pas (une seule question en litige) : 1
  • NON – il est difficile de cerner les questions en litige distinctes : 0
B3 Dans chaque section portant sur une question en litige, la question en litige est-elle établie clairement et rapidement?
  • OUI – clairement et rapidement : 1
  • NON – un mais pas l’autre, ou pas du tout : 0
B4 Est-ce que la ou le membre explique clairement comment le droit (lois et jurisprudence) s’applique à chaque question en litige ou aux faits?
  • OUI – clair pour une personne non initiée : 2
  • OUI – clair pour une avocate ou un avocat : 1
  • NON – cite le droit de façon générique ou pas du tout : 0

C. Évaluation des choix de style

C1 Y a-t-il des chaînes de renvois qui interrompent vraiment votre lecture? Choisir « NON » s’il y a des chaînes de renvois dans les notes en bas de page ou qui ne nuisent pas à la lecture.
  • OUI – distrayant : 0
  • NON – aucune ou elles ne nuisent pas à la lecture : 1
C2 Y a-t-il de longues citations en bloc qui interrompent vraiment votre lecture? Choisir « NON » s’il n’y en a pas ou si elles sont accompagnées d’un résumé ou d’une inférence que devrait faire le lectorat.
  • OUI – une ou trop nombreuses : 0
  • NON : 1
C3 Surutilisation de jargon juridique ou de vocabulaire spécialisé?
  • OUI (fournir des exemples dans les Commentaires) : -1
C4 Surutilisation de noms là où des verbes auraient pu être utilisés?
  • OUI (fournir des exemples dans les Commentaires) : -1
C5 Y a-t-il de longs paragraphes qui auraient pu être subdivisés ou remplacés par des listes?
  • OUI (fournir le numéro des paragraphes dans les Commentaires) : -1
C6 Y a-t-il des phrases passives qui devraient être actives? Choisir « NON » s’il n’y en a pas ou si la voix passive semble avoir été utilisée dans un but précis.
  • OUI (fournir des exemples dans les Commentaires) : -1
  • NON : 0
C7 Une fois la lecture terminée, réfléchir au style d’écriture de façon générale (en ignorant la structure et le contenu). Réfléchir particulièrement au nombre de fois où un mauvais style d’écriture a été utilisé. Le style de rédaction était généralement :
  • Très bien! – pourrait être utilisé comme exemple : 2
  • Bien : 1
  • Moyen – quelque part entre les deux : 0
  • À améliorer : -1
  • Terrible – lourd, verbeux et porte à confusion : -2

D. Note finale
D1 Total des points :
D2 Lisibilité de Flesch (utiliser Antidote) :
D3 Niveau de langue Flesch-Kincaid (utiliser Antidote) :
D4 Mots (utiliser Antidote) :
D5 Numéro de dossier :
D6 Nom de l’évaluatrice ou de l’évaluateur :

Annex C: Self-represented versus represented Appellants

Percentage of decisions achieving plain language scores by representation
Category Self-represented Represented
Grade 9 29% 24%
All decisions 72% 64%
AD 64% 62%
GD-EI 73% 63%
GD-IS 73% 66%
English 69% 60%
French 79% 80%
Overview 68% 59%
Structure 66% 61%
Style 79% 71%

Annex D: Survey to members

Do you hear appeals in the (check all that apply): Appeal Division (AD), General Division – EI (GD-EI), General Division – IS (GD-IS). Each answer below shows the count and row percentage for each division, followed by the total. Because members could select more than one division, division counts may exceed the total.

Q1 – Did you receive training from the Tribunal on plain language decision writing?
  • Yes – AD: 8 (19%); GD-EI: 24 (57%); GD-IS: 12 (29%); Total: 42
  • No – AD: 0 (0%); GD-EI: 0 (0%); GD-IS: 0 (0%); Total: 0

Q2 – To what extent would you say that your decisions are now more readable to laypersons since taking the training?
  • Much more readable – AD: 4 (17%); GD-EI: 13 (54%); GD-IS: 7 (29%); Total: 24
  • Somewhat more readable – AD: 4 (22%); GD-EI: 11 (61%); GD-IS: 5 (28%); Total: 18
  • About the same – AD: 0 (0%); GD-EI: 0 (0%); GD-IS: 0 (0%); Total: 0
  • Not sure – AD: 0 (0%); GD-EI: 0 (0%); GD-IS: 0 (0%); Total: 0

Q3 – Are there plain language strategies you were taught that you have a hard time applying? If yes, please explain.
  • Yes – AD: 5 (17%); GD-EI: 15 (52%); GD-IS: 10 (34%); Total: 29
  • No – AD: 3 (23%); GD-EI: 9 (69%); GD-IS: 2 (15%); Total: 13

Q4 – To what extent do you adjust the language in your reasons depending on whether the appellant is represented? For example, if the claimant is represented by a lawyer, you will write in more formal language while you will write in plainer language if unrepresented.
  • To a great extent – AD: 1 (100%); GD-EI: 0 (0%); GD-IS: 0 (0%); Total: 1
  • Somewhat – AD: 4 (21%); GD-EI: 11 (58%); GD-IS: 4 (21%); Total: 19
  • Very little – AD: 1 (8%); GD-EI: 7 (58%); GD-IS: 4 (33%); Total: 12
  • Not at all – AD: 2 (22%); GD-EI: 5 (56%); GD-IS: 4 (44%); Total: 9
  • Not sure – AD: 0 (0%); GD-EI: 1 (100%); GD-IS: 0 (0%); Total: 1

Q5 – To what extent do you adjust the language in your reasons to the claimant’s perceived level of literacy?
  • To a great extent – AD: 2 (13%); GD-EI: 10 (63%); GD-IS: 4 (25%); Total: 16
  • Somewhat – AD: 4 (25%); GD-EI: 10 (63%); GD-IS: 3 (19%); Total: 16
  • Very little – AD: 1 (20%); GD-EI: 2 (40%); GD-IS: 2 (40%); Total: 5
  • Not at all – AD: 1 (20%); GD-EI: 2 (40%); GD-IS: 3 (60%); Total: 5
  • Not sure – AD: 0 (0%); GD-EI: 0 (0%); GD-IS: 0 (0%); Total: 0

Q6 – How important is it for you that claimants, rather than representatives, be able to easily read and understand your decisions?
  • Extremely important – AD: 7 (19%); GD-EI: 21 (58%); GD-IS: 10 (28%); Total: 36
  • Somewhat important – AD: 0 (0%); GD-EI: 3 (75%); GD-IS: 1 (25%); Total: 4
  • Important – AD: 1 (100%); GD-EI: 0 (0%); GD-IS: 0 (0%); Total: 1
  • Of little or no importance – AD: 0 (0%); GD-EI: 0 (0%); GD-IS: 1 (100%); Total: 1

Q7 – What additional training or support would you like from the Tribunal? (Check all that apply)
  • Decision templates – AD: 3 (15%); GD-EI: 11 (55%); GD-IS: 7 (35%); Total: 20
  • Glossaries – AD: 4 (36%); GD-EI: 4 (36%); GD-IS: 5 (45%); Total: 11
  • Plain language guide, updated Style Guide – AD: 4 (21%); GD-EI: 12 (63%); GD-IS: 5 (26%); Total: 19
  • Editing tools or software – AD: 4 (24%); GD-EI: 8 (47%); GD-IS: 6 (35%); Total: 17
  • Training or refresher sessions – AD: 2 (11%); GD-EI: 14 (74%); GD-IS: 5 (26%); Total: 19
  • Don’t know – AD: 1 (25%); GD-EI: 2 (50%); GD-IS: 1 (25%); Total: 4

Q8 – If decision templates offering plain language options were available, how likely would you be to use them?
  • Very likely – AD: 2 (7%); GD-EI: 18 (67%); GD-IS: 8 (30%); Total: 27
  • Somewhat likely – AD: 4 (36%); GD-EI: 5 (45%); GD-IS: 3 (27%); Total: 11
  • Unlikely – AD: 1 (33%); GD-EI: 1 (33%); GD-IS: 1 (33%); Total: 3
  • Very unlikely – AD: 1 (100%); GD-EI: 0 (0%); GD-IS: 0 (0%); Total: 1