Tag Archives: analytics

Can a computer program help us identify unknown writers? 4

Right now I am sorry to say that I haven’t had great success with the computer program that I was hoping would help us to identify unknown writers. I’m by no means declaring it to be impossible or unrealistic, but I think I will need to ask for help from the experts who wrote the program and/or who do more of this sort of analysis on a day-to-day basis.

My initial trials tested one script known to be by Jay Over against another known to be by him, to see whether the program could pick out a ‘known good’ example. It did that pretty well, but it may be that I calibrated the program options too closely to Jay Over. I haven’t yet got to the stage of being able to say that this series of tests, done in this way, gives you a good chance of identifying a text by any known author. (Unless that known author is Jay Over, she says slightly bitterly.) And if I can’t do this reasonably reliably, there is no point (as yet, at least) in moving on to trying out texts by unknown authors.

In my last post about this computer program, I ran a series of 10 tests against a Jay Over text, and the program reliably picked out Jay Over as the most likely author of that text out of a supplied set of 4 test authors. It was much less reliable in picking out a test Malcolm Shaw text out of the same set of test authors: only 5 of the 10 tests suggested that Malcolm Shaw was the best fit. I have now tried the same 10 tests with an Alan Davidson text (“Jackie’s Two Lives”), and with a Pat Mills text (“Girl In A Bubble”). This means that all four of the test authors have been tested against a text that is known to be by them.

  • Unfortunately, in the test using an Alan Davidson text the program was even worse at picking him out as the ‘best fit’ result: it only did so in 2 of the 10 tests, and in 4 of the tests it placed him last, i.e. as the least likely to have written that test text.
  • In the test using a Pat Mills text, the program was rather better at picking him out as the ‘best fit’ result, though still not great: it did so in 4 out of 10 tests, and in 3 of the remaining tests he was listed second; and he was only listed as ‘least likely/worst fit’ in one of the tests.

The obvious next step was to try with a larger group of authors. I tried the test texts of Jay Over (“Slave of the Clock”) and of Malcolm Shaw (“Bella” and “Four Faces of Eve”) against a larger group of 6 authors (Primrose Cumming, Anne Digby, Polly Harris, Louise Jordan, Jay Over, Malcolm Shaw).

  • With the Jay Over text, only 7 of the 10 tests chose him as the ‘best fit’, so the attribution of him as the author is showing as less definite in this set of tests.
  • With the Malcolm Shaw texts, only 1 and 3 tests (for “Bella” and “Four Faces of Eve” respectively) identified him as the ‘best fit’ – not enough for us to have identified him as the author if we hadn’t already known him to be so. (He also came last, or second to last, in 4 of the first set of tests, and likewise in 4 of the second set.)

I should also try with more texts by each author. However, I think that right now I will take a break from this, in favour of trying to contact the creators of the program. I hope they may be able to give me better leads on the right direction to take this in. Do we need to have much longer texts for each author, for instance? (We have generally been typing up just one episode for each author – I thought it might be too much of an imposition to ask people to do any more than that, especially as it seemed sensible to try to get a reasonably-sized group of authors represented.) Are there some tests I have overlooked, or some analytical methods that are more likely to be applicable to this situation? Hopefully I will be able to come back with some extra info that means I can take this further – but probably not on any very immediate timescale.

In the meantime, I leave you with the following list of texts that people have kindly helped out with. You may find (as I have) that just looking at the texts themselves is quite interesting and revealing. I am more than happy to send on any of the texts if they would be of interest to others. There are also various scans of single episodes sent on by Mistyfan in particular, to whom many thanks are due.

  • Alison Christie, “Stefa’s Heart of Stone” (typed by Marckie)
  • Primrose Cumming, “Bella” (typed by Lorrbot)
  • Alan Davidson, three texts
    • “Fran of the Floods” (typed by Marckie)
    • “Jackie’s Two Lives” (typed by me)
    • “Kerry In the Clouds” (typed by me, in progress)
  • Anne Digby, “Tennis Star Tina” (typed by Lorrbot)
  • Gerry Finley-Day, “Slaves of War Orphan Farm” (typed by Mistyfan)
  • Polly Harris, two texts
    • “Monkey Tricks” (typed by Mistyfan)
    • “Midsummer Tresses” (typed by Mistyfan)
  • Louise Jordan, “The Hardest Ride” (typed by Mistyfan)
  • Jay Over, two texts
    • “Slave of the Clock” (typed by me)
    • “The Secret of Angel Smith” (typed by me)
  • Malcolm Shaw, five texts
    • “Lucky” (typed by Lorrbot)
    • “The Sentinels” episode 1 (typed by Lorrbot)
    • “The Sentinels” episode 2 (typed by Lorrbot)
    • “Bella” (typed by Lorrbot)
    • “Four Faces of Eve” (typed by Lorrbot)
  • Pat Mills, two texts
    • “Concrete Surfer” (typed by me)
    • “Girl In A Bubble” (typed by me)
  • John Wagner, “Eva’s Evil Eye” (typed by Mistyfan)



Pat Davidson writes

(comment sent by email)
I was interested to read about your computer programme designed to identify authors. If you need another story to test, Alan was the author of the brilliant “Paint It Black” – although this was for Misty, not Jinty [faint carbon copy of one of his invoices attached]. I have carbon copies of some of his actual scripts for various publications, when I can find them, although I know these will be equally faint.

Paint It Black invoice ADavidson

[editorial comment] Of course I need hardly say that any scripts or further information on Alan Davidson and what he wrote will be extremely welcome! The words ‘eager anticipation’ come to mind.

Can a computer program help us identify unknown writers? 3

I have been working hard at calibrating the computer ‘authorship identification’ program mentioned in previous posts. The results are mixed, but offering interesting possibilities.

First of all, I narrowed down the many options offered by the program, by running a series of tests to see how well it picked out a known Jay Over text as being by that author, when compared to a set of four texts with known authorship. To help with this, I also referred to a post by the program’s author where he talks about the tests he used to identify J K Rowling as the author of The Cuckoo’s Calling, which had recently been published under a pseudonym.

JGAAP analysis 1
JGAAP analysis 2

There are 10 sets of tests: the two highlighted in light purple represent tests used by the software author in the identification of J K Rowling, and the others are ones that I found through trial and error as being good identifiers of Jay Over’s writing in my test examples. I have got very little idea as to what the tests actually mean, or the numbers in the results, so the theory underpinning all this is pretty opaque to me! This means I could easily be making some sort of obvious beginner’s mistake that would be clear to someone who knew more than me about this, but never mind. This is a sort of calibration run, in effect.

One thing that is interesting and reassuring to see is that there is a distinct gap between the numerical results above for Jay Over as compared to the other texts / authors (see the column ‘Difference’). For instance, for the test ‘Word Ngrams’, the gap between Jay Over and Malcolm Shaw is 382.1303, whereas the gap between Malcolm Shaw and Alan Davidson on the same test is only 64.3022, and the gap between Alan Davidson and Pat Mills is 136.9977. What these units are, I will remain in blissful ignorance of, for now; but it seems to show that Shaw / Davidson / Mills cluster together more closely than they are linked to Over.
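As a rough illustration of what a test like ‘Word NGrams’ is measuring – this is purely my own sketch, with made-up helper names, not JGAAP’s actual implementation or its real units – the idea is to turn each text into a frequency profile of small word sequences and then measure how far apart two profiles are:

```python
from collections import Counter

def word_ngrams(text, n=2):
    """Lowercased word n-grams - the kind of 'event' a Word NGrams test counts."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def profile_distance(a, b):
    """Manhattan distance between relative-frequency profiles (one of many
    possible distances; whatever JGAAP reports depends on the chosen analysis)."""
    ta, tb = sum(a.values()), sum(b.values())
    pa = {k: v / ta for k, v in a.items()}
    pb = {k: v / tb for k, v in b.items()}
    return sum(abs(pa.get(k, 0.0) - pb.get(k, 0.0)) for k in set(pa) | set(pb))

def rank_authors(test_text, known_texts):
    """Return (author, distance) pairs, nearest first - the 'best fit' ranking."""
    test = word_ngrams(test_text)
    scores = {author: profile_distance(test, word_ngrams(sample))
              for author, sample in known_texts.items()}
    return sorted(scores.items(), key=lambda item: item[1])
```

On this picture, the ‘Difference’ column above is the gap between adjacent distances in the ranking: a large gap between first and second place is what gives some confidence in the attribution, while a tight cluster suggests the test can’t really tell the candidates apart.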

What happens when I try it against a different text – one of Malcolm Shaw’s? The results are more mixed this time. Malcolm Shaw is identified as the most likely author in 5 of the 10 tests, but in 4 of the remaining tests it is Jay Over who is suggested as most likely (and Pat Mills is suggested in the other test). (The tests where the wrong author is found are marked in red below.)

JGAAP analysis 3
JGAAP analysis 4

I tried it again with a different Malcolm Shaw text and again Shaw is only picked out as the likely author in 5 of the 10 tests (a slightly different set of 5, to boot).

JGAAP analysis 5
JGAAP analysis 6

For now this is not really strong enough evidence to say this program is definitely a good tool for this job, though it does suggest a couple of intriguing ideas. One possible explanation is that maybe in narrowing the tests down, I’ve just picked out tests that are good at looking for some characteristic in texts that Jay Over particularly happens to use in his writing. But another one, which is rather more intriguing, is that, well, we know people wrote under pseudonyms, and we know that Malcolm Shaw / Pat Mills / Alan Davidson are all real names of real people – but do we know for absolute definite whether Jay Over is a real name or a pseudonym – of Malcolm Shaw’s???

Of course this means more tests. Next, I want to try to see how the program gets on with a test text by Alan Davidson and a test text by Pat Mills. Will these show the same sort of mixed results, meaning that we might not be at a point of having a really good check to use with an unknown author? Or will these authors be easier for the program to identify more definitely?

For reference, the following texts were used in the above tests. I am differentiating between ‘test texts’, meaning the ones that the program is asked to decide on an author for; and ‘check texts’, which are the ones that the program knows are by certain authors.

  • Jay Over test text: episode from “Slave of the Clock”, typed by me
  • Jay Over check text: episode from “The Secret of Angel Smith”, typed by me
  • Alan Davidson check text: episode from “Fran of the Floods”, typed by Marckie
  • Malcolm Shaw check text: episode from “The Sentinels”, typed by Lorrbot
  • Malcolm Shaw test text 1: episode from “Bella”, typed by Lorrbot
  • Malcolm Shaw test text 2: episode from “Four Faces of Eve”, typed by Lorrbot
  • Pat Mills check text: episode from “Concrete Surfer”, typed by me

With many thanks to all who have sent in scans of stories (Mistyfan in particular) and/or typed episodes (Marckie/Lorrbot/Mistyfan).

Attached below: PDF version of the analysis, hopefully readable

JGAAP analysis PDF

Can a computer program help us identify unknown writers? 2

The jury’s still out at present, but I am very grateful for the kind offers of help of various sorts! I now have text versions of episodes of:

  • “Slave of the Clock” and “The Secret of Angel Smith” (Jay Over)
  • “The Sentinels” (Malcolm Shaw)
  • “Fran of the Floods” (Alan Davidson)
  • “Concrete Surfer” (Pat Mills).

So I have enough to try to see if I can get the program to identify “Slave of the Clock” as being by Jay Over rather than any of the other writers. If anyone is able to send in any more texts, the following would be useful:

  • Some texts by female writers such as Anne Digby, Alison Christie, Benita Brown
  • Some more texts by the writers named above, so that I can offer the program a wider base of texts per writer (rather than keeping on increasing the number of individual authors)

How far have I got so far? Not that far yet, I’m afraid to say. I have downloaded a copy of the program I chose (JGAAP) and I’ve got it to run (not bad in itself as this is not a commercial piece of software with the latest user-friendly features). I’ve loaded up the known authors and the test text (Slave of the Clock). However, the checks that the program gives you as options are very academic, and hard for me to understand as it’s not an area I’ve ever studied. (Binned naming times, analysed by Mahalanobis distance? What the what??) Frankly, I am stabbing at options like a monkey and seeing what I get.

I can however already see that some of the kinds of checks that the program offers are plausibly going to work, so I am optimistic that we may get something useful out of this experiment. These more successful tests involve breaking down the texts into various smaller elements like individual words, or small groups of words, or the initial words of each sentence, or by tagging the text to indicate what parts of speech are used. The idea is that this should give the program some patterns to use and match the ‘test’ text against, and this does seem to be bearing fruit so far.
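To give an idea of what ‘breaking the texts down into smaller elements’ can look like in practice, here is a small sketch (the function names are mine, and JGAAP’s own event types are considerably more sophisticated – part-of-speech tagging, for instance, would need an extra tool such as a tagger library):

```python
import re

def words(text):
    """Individual words - the simplest event type."""
    return re.findall(r"[a-z']+", text.lower())

def word_pairs(text):
    """Small groups of words (here, pairs), keeping a little local context."""
    ws = words(text)
    return list(zip(ws, ws[1:]))

def sentence_openers(text):
    """The first word of each sentence - a habit that can mark out an author."""
    sentences = re.split(r"[.!?]+\s*", text)
    return [words(s)[0] for s in sentences if words(s)]
```

Each of these turns a text into a stream of countable events, and it is the relative frequencies of those events that give the program patterns to match against.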

So, an interim progress report – nothing very definite yet but some positive hints. I will continue working through the options that the program offers, to see if I can narrow down the various analytical checks to a subset that look like they are successfully identifying the author as Jay Over. I will then run another series of tests with a new Jay Over file – I’ll type up an episode of “The Lonely Ballerina” to do that, unless anyone else has kindly done it before me 🙂 – scans from an episode are shown below, just in case! That will be a good test to see if the chosen analytical checks do the job that I hope they will…

Jay Over, Lonely Ballerina pg 1

Jay Over, Lonely Ballerina pg 2
Jay Over, Lonely Ballerina pg 3

Can a computer program help us identify unknown writers?

I don’t know yet, but I’m going to give it a go.

And I’ll need a little help from others, please.

I have been thinking about the problem of unknown writers and how we can try to identify them. In writing story posts here, Mistyfan and I sometimes raise questions about whether such and such a writer might have also written such and so other story, based on things like similar plot lines and the like. But there is a whole area of research into using computers in the Humanities, and a specific technique designed to help you attribute authorship to unknown writers: it’s called Stylometry. I want to try to use one of the pieces of software that does this – JGAAP – to see if we can get any help in thinking about who might have written what, or at least in some cases. (Edited to add: this is written by the chap who did the analysis that strongly suggested that J K Rowling was the author of “The Cuckoo’s Calling”.)

The way it works is that I need to feed the program a number of texts from Known Authors, because it then compares the unknown writing with those known samples. (All it can ever do is say ‘this piece looks most likely to have been written by Author A out of the list of A – Z that you have given me’ – it’s just matching a sample to a known finite list, so it has limitations.) That means I need some text files (as many as possible) which are typed-up versions of stories where we already know the authors, such as the below:
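A toy version of that closed-set matching might look like the following (letter frequencies stand in for the much richer feature sets the real program offers, and the function names are mine, not JGAAP’s):

```python
from collections import Counter
from math import sqrt

def letter_profile(text):
    """Relative letter frequencies - a deliberately crude stand-in feature set."""
    letters = [c for c in text.lower() if c.isalpha()]
    total = len(letters)
    return {c: n / total for c, n in Counter(letters).items()}

def most_likely_author(unknown, known):
    """Pick the closest author from the supplied list. Note the limitation:
    if the true author is not in `known`, we still get an answer - just a
    wrong one."""
    def cosine(p, q):
        dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm
    target = letter_profile(unknown)
    return max(known, key=lambda author: cosine(target, letter_profile(known[author])))
```

The important point is in the comment: the function always returns somebody from the list, which is exactly why a ‘best fit’ result on its own can never prove authorship.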

  • Jay Over, Slave of the Clock / The Secret of Angel Smith / The Lonely Ballerina from Tammy 1982 and 1983
    • I can do the first two but haven’t got any copies of The Lonely Ballerina
  • Alison Christie – see list on the interview post
  • Pat Mills, various stories including Moonchild in Misty and Concrete Surfer in Jinty
    • I am in the middle of typing up the episode of Concrete Surfer included in the post about this story
  • Alan Davidson, Fran of the Floods / The Valley of Shining Mist / Gwen’s Stolen Glory
  • Malcolm Shaw, The Robot Who Cried

Can anyone help by typing up one or more episodes from the stories mentioned, and sending them to me? I’m working out a standard format to use, because it’s going to be important to be consistent about things like how to indicate thought balloons or the text boxes at the beginning of each episode. We can work that out further together of course. Very many thanks in advance!

Once I have enough example files to start running them through the program, this is what I am intending to try (any comments or suggestions will be received with interest).

  1. Can I get the program to work at all?
    • If I load a credited Jay Over text as a Known Author, and a Pat Mills story likewise as a Known Author, will an episode of “Slave of the Clock” be successfully identified as a Jay Over story?
  2. What if I then compare a credited “Pam of Pond Hill” story – will the program identify this as a Jay Over story, or will the comedy style mean it is not as recognisable to the program?
  3. What if I then compare an uncredited “Pam” story with a credited “Pam” story? We think all the Pam stories were written by Jay Over but could this program show us any other views?
  4. What if I then add in more Known Authors and re-run the tests above – will the results still come out the same?
  5. And then excitingly I could try some further tests, like:
    • If I compare an episode of “Prisoner of the Bell” to “Slave of the Clock”, does the former look like the known Jay Over texts?
    • If I compare an episode of “E. T. Estate” by Jake Adams to the uncredited story “The Human Zoo”, what does the program indicate about any plausible attribution?
    • We think Benita Brown probably wrote “Spirit of the Lake” – is there any textual / stylistic similarity we can find between this and “Tomorrow Town” that we know she wrote?

Of course no stylistic attribution program is going to replace a statement from a creator or a source from the time, but we know these are thin on the ground and getting thinner, and what’s more people’s memories and records are getting more fragmentary as time goes by, so this seems worth trying. I don’t expect anything to happen very quickly on this because it does mean quite a bit of typing to get a good body of texts. If anyone is able to help on the typing front then I will be very grateful and hopefully will then be able to show any results sooner rather than later.

Apologies, I had meant to say something about the format of the text. I have a sample document which hopefully can be viewed via this link. In case that doesn’t work, this is what I mean for it to look like:

text grab

But I can add in extra detail such as the description that the text appeared in a word balloon, if I have a scan of the pages in question.

The Bechdel Test and Beyond – Part IV, Boys Comics

It looks like girls’ comics portray a very wide range of roles for girls and women – perhaps wider than is the case in some recent mainstream media targeted at a girls’ market – and that boys and men are integrated into that world of girls as well. The roles that boys and men are shown as playing are not as wide-ranging as those that girls and women are shown in, but compared to the representation of other groups such as Black and Minority Ethnic characters, boys and men get much more of a look-in. What about boys’ comics? What sort of representation do they have of girls and women? Do boys and men get as wide a range of representation in the comics targeted at them as a market?

A couple of notes of caution before I go further. One obvious one is about the scope of what I am looking at in this post – I don’t own very many boys’ comics at all and so I am restricted in what I can easily put through this test. I am of course very happy to hear comments to expand on or to refute the results I will go through below. If readers want to apply the tests to a wider range of boys’ comics I would be very happy to hear the results, and to put them in a follow-up post if people would like. For instance, I could imagine that repeating this analysis on 2000AD – perhaps on the same sort of date range as my examples below – could reveal an interesting set of different results. (Edited to add: the latest post on the blog Great News For All Readers gives a clear hint that Valiant and Lion in 1975 would almost certainly have given different results even to the following year’s copy of Battle and Valiant that I look at below.)

The second, perhaps more major, caveat is to reiterate something I said in earlier posts: this test says nothing about the quality of the individual story or comic, and was never intended to. A story can be great entertainment, excellently written, touching and humane, exciting and innovative, while dealing with a very small subset of humanity or of human concerns. The problem comes when only a small subset of humanity is used as the usual channel for stories – when no one else gets a look-in and it is not even questioned. The overall range of stories that are told ends up narrower, but also a wide range of people – those not included in that selected small subset – are implicitly told they are not the stuff of stories. To repeat again, please do not take the comments below as a negative judgment of the comics I am looking at – I can see the stories are exciting and well crafted. That’s exactly why it’s important to apply an analytic test that doesn’t talk about the way the story makes the reader feel or whether it’s well-done – to look at an aspect of the story or comic that can otherwise get hidden by those subjective judgments.

Rounded Representation 5

I was able to look at: four issues of Battle Action dated between March and April 1980, and one issue of Battle and Valiant dated December 1976, all five of which were published by IPC. I have grouped these under the heading ‘War comics’ above, coloured green. Not all boys’ comics are about war as a genre, I appreciate, and thankfully I also had an individual issue of a more general comic that featured a wider genre range – DC Thomson’s Spike, dated 26 November 1983. I didn’t look at humour comics (which weren’t marketed in as gender-specific a way) or at the action adventure comics of an earlier age. Other notable omissions are the weekly publications with a more overtly didactic element and an implicit seal of parental approval – Eagle, Look and Learn, and the like.

First of all, do any of the chosen boys’ comics pass the Bechdel test? If not, there is very little chance of them passing the Rounded Representation test, because they are unlikely to have any sort of range of female characters depicted. It’s perhaps not that unexpected that none of the war comics I looked at had as many as two named female characters – there was one story in Battle and Valiant (“The Black Crow”), set in a nursing home, that showed three uniformed nurses, all of them unnamed. Arguably two of the nurses have a minimal conversation, which I didn’t indicate in the analysis above, but it is about a man (“Mon dieu… Germans!” “It is Major Klaus von Steutsel… head of the Gestapo in Pontville!”). (Most of the issues had absolutely no women depicted at all, so to have turned to a page with as many as three women on it felt quite unusual.)
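For anyone wanting to apply the same check systematically, the criterion as I am using it here – at least two named characters of the given gender sharing a conversation that isn’t about the opposite gender – can be written out as a small checker. The data structure is entirely hypothetical, just to make the rule explicit (it also covers the ‘reverse Bechdel’ used later, by swapping the gender):

```python
def passes_bechdel(story, gender="F"):
    """Check a story against the (reversible) Bechdel criterion: at least two
    named characters of the given gender who share a conversation that is not
    about the opposite gender.
    `story` is a hypothetical structure: a list of conversations, each with its
    participants (gender plus whether they are named) and a topic flag."""
    for convo in story["conversations"]:
        named = [p for p in convo["participants"]
                 if p["gender"] == gender and p["named"]]
        if len(named) >= 2 and not convo["about_opposite_gender"]:
            return True
    return False
```

On this encoding, the nurses’ scene in “The Black Crow” fails twice over: the participants are unnamed, and the conversation is about a man.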

The copy of Spike also does not have any stories that pass the Bechdel Test: a couple of the stories had mothers mentioned or shown, but again none of the female characters were named. It felt like slightly less of an exclusively male world on show; balanced against that though, the nurses in “The Black Crow” were professionals with roles of their own, rather than generic wives and mothers, so perhaps honours are even.

In any case, it’s easy to see that girls and women are considerably less well represented in these boys’ comics than boys and men are in girls’ comics. Even in publications that include a lot of fantastical stories, girls’ stories are not set in an exclusively female world; boys and men are given parts that are more than purely token. Not so in (these) boys’ comics. I have therefore not gone through the Rounded Representation test looking at the depiction of girls and women in boys’ comics.

What of the roles that boys and men are depicted in, in these publications aimed at them? Now here’s an interesting thing – in terms of representation, boys are actually slightly hard-done-by in their ‘own’ comics, more so than in girls’ comics. That will need a little more teasing apart than the simple ticks in the various cells above show, however (which is also often the case with the Bechdel Test). So, let’s look at the Rounded Representation test as applied to male characters in boys’ comics.

  • Emotions: in both the war comics and the individual issue of Spike it was possible to find depictions of the range of emotions. I must say though that it was a lot easier in the single issue of Spike; in the war comics it took me looking through most of the pages before I was able to find much in the way of happiness. There was a lot of fear and doubt, and friendship wasn’t hard to find (D-Day Dawson ready to sacrifice himself for his buddies, Jimmy Miller trying his hardest to win Machine Gun Cooley’s friendship). Happiness, and even anger, weren’t anything like as prevalent though. I felt like the tone was a fairly steady and grim one: not many highs and lows of emotion overall, other than perhaps fear in particular.
  • Abilities: Again the individual issue of Spike has a wider range of abilities shown – the war comics stuck to fairly realistic representations of physical and mental feats. Spike included a story about a footballer, a Conan-type warrior, and the immortal “Wilson, Maker of Champions”, who was clearly especially clever to boot. (I wasn’t however quite sure, on that brief sample, that I could call him superhumanly so, hence the question mark for that cell.)
  • Challenges: as mentioned earlier, the war comics I looked at are pretty focused on realism so there are no fantastical challenges faced by the protagonists. And while World War II is clearly a society-wide threat if ever there was one, I didn’t feel that the protagonists’ roles in these stories were really about trying to stop the whole war, they were much more specific than that. There were of course plenty of threats in the war comics, driving the story along; fewer positive goals, but I counted Jimmy Miller’s quest to win Machine Gun Cooley’s friendship as such. Spike includes fantasy and realism, individual challenges and wider-spread ones, and a few positive goals as well as external threats (a group of inner-city kids work hard to start a City Farm, and Wilson has a visitor who wants to be made into a champion decathlon athlete).
  • Ages: neither the war comics nor Spike show any very young children – baby brothers or suchlike. Not very surprising in a war comic, but there were no families escaping the horrors of war or similar – the focus was pretty narrowly on the soldiers themselves, hence on young adults and grown ups. The story in Battle and Valiant mentioned above, “The Black Crow”, features old men in a nursing home, and expands the range of ages noticeably. Spike, once again, is wider in its range than the war comics and ticks most of the boxes fairly comfortably.
  • Roles: so few of the characters are female that there is hardly any way that these comics couldn’t have featured men and boys as all of the range looked at: protagonists, sidekicks, villains, and background characters.

Overall, the Rounded Representation test looks like it shows a pretty wide representation of male characters in these boys’ comics, though some of the ticks would have ended up as blank cells if only one or two individual issues had been examined. Certainly some of the result is about genre, with (these) war comics likely to focus on young adults and grown men in a realistic setting, facing individualistic challenges in an overall story tone of fear and anger, with little happiness depicted. Of course in principle war comics could work differently – “Rogue Trooper” is a war story set in a science fiction milieu with an overarching threat to the whole of the world and positive goals based in comradeship as well as threats from external forces. (There’s even at least one named woman in it, though whether as a whole it passes the Bechdel Test I am not sure.) Overall however it is pretty clear that whereas girl readers had their stories set in a world which represented them in a rounded way as people endowed with possibilities both good and bad, boy readers were given more circumscribed stories with a narrower set of options – and very little room indeed for their sisters, mothers, and fierce warrior Leelas.

The Bechdel Test and Beyond – Part III

So, in the last two posts on this blog I introduced a new Rounded Representation test that takes us beyond the starting point that is the Bechdel Test, and gave various examples of its use.

In the first post:

  • We saw that girls’ comics of the 1970s had very fully rounded representation of the female characters in their pages; even in a single issue of one girls’ comic (chosen primarily for easy accessibility) there was female representation of a wide range of emotions, abilities, challenges faced, ages, and roles.
  • In comparison, other groups of readers are not likely to be represented anything like as fully. The same test done for BME (Black / Minority Ethnic) characters results in a very much patchier picture of representation. Across the whole run of a single title, there are some significant gaps in representation, and in a single issue of a title, there is very little guarantee of representation of this group, despite the net being cast as widely as possible (by testing for any BME representation rather than specifically Asian or Black British representation, for instance).

In the second post:

  • We saw that recent stories targeted at girls (a My Little Pony feature film, a Barbie doll webcast, and the Tangled film from Disney) also generally showed a fairly fully rounded representation of the female characters, though the representation of girls and women in the Barbie webclip was noticeably patchier than was the case for the other two.
  • Just because something is targeted at a female audience, it is not necessarily the case that the representation of female characters will be fully rounded.

In this post, we will look at the representation of male characters in comics aimed at girls, and in the next post we’ll look at the same in comics targeted at boys. Do girls’ comics only show us female characters – an almost absolute reversal of the way that mainstream media is dominated by male characters? Or do they give readers a rounded representation of both genders? Likewise in comics intended for a male market – how do they represent both the gender that they are targeting, and the other half of the world?

First of all, what happens when we do a ‘reverse Bechdel’ on girls’ comics – checking to see if there are at least two named male characters who interact with each other? There are one or two stories in Tammy and in Jinty which have male protagonists, and if any stories pass this reverse Bechdel then they will. The Tammy story “Cuckoo In The Nest” is a particularly good example (see this Booksmonthly article for a synopsis halfway down the page). This story passes without many worries – although protagonist Leslie is forced to attend a girls’ boarding school in disguise as a girl and therefore mostly interacts with ‘other girls’, he also meets up with his friend from home, talks to his Uncle Fred, and even finds a local group of boys he can play football with when he escapes from his female disguise.

The story also covers most of the bases on the Rounded Representation test: the male characters are shown with a range of emotions (I didn’t have the whole story to hand and didn’t see much anger depicted, but I may have missed this through not looking at all the episodes). It’s a fairly realistic story, or at least not a story of magic or science fiction, so the male characters don’t show any superhuman abilities, but we see Leslie playing football and solving various problems (such as how to fool his schoolgirl chums into continuing to think he is a girl). The story is really based around the fairly individualistic challenge for him not to get caught out, though there are also some positive goals he is trying to achieve (such as continuing to enjoy himself by playing football well). We don’t see that wide a range of ages in the male characters shown – no little boys or old men in the episodes I looked at, but they may be included in later episodes so I have put question marks here. And of course Leslie in this case is clearly the protagonist, but the villains or antagonists are all female (his Great-Aunt, and a nosey schoolgirl who has to be prevented from finding out his secret). The sidekick in the story is a schoolgirl chum who has her own reasons for being on his side. We might perhaps count his Uncle Fred as a sidekick but I am more inclined to categorize him as a background character – happy to hear arguments on this though.

It’s also helpful to check an individual issue of a girls’ comic that wasn’t specially chosen as likely to pass, so let’s go back to the 1978 issue of Misty that was referred to in the first of these posts and do the same tests. This does pass the reverse Bechdel test, though only once you get over halfway through the issue: in the complete story “The Love and the Laughter” the devil has a short conversation with two named male characters about a book, and in “The Sentinels” there are a few conversations between policemen.

As for the Rounded Representation test on this issue of Misty, it passes most of the hurdles relatively easily:

  • The male characters are shown with a wide variety of emotions (for instance the fathers in “Seal Song” and in “Paint It Black” are both happy, though not in ways that are likely to bode well for their respective daughters).
  • They show a range of abilities both physical and mental, realistic and supernatural (I’m not totally convinced that the devil in the Carnival story can be said to be using more than human mental powers, hence the question mark in that cell).
  • The male characters face a range of challenges, whether individual or more widespread (in “The Sentinels”, the father is part of a resistance group fighting the Nazis, which definitely counts). It’s not so clear whether any of the male characters in this issue have a positive goal they are trying to achieve, so much as threats they are aiming to survive; and of course this is a horror comic, so most of the challenges that all the characters face are more supernatural than mundane. (The protagonist of “Moonchild” faces the mundane challenges of an abusive mother and some horrible bullies, but she is a female character and hence does not come into this specific test.)
  • We see a reasonable range of ages in the male characters depicted – no babies or toddlers at all, whether girls or boys, but plenty of grown men, a boy of a similar age to the protagonist in the background of the end of the seal story, one young adult in the Dragon story, and old men in the carnival story.
  • None of the male characters are given the role of protagonist in this issue but we do see men and boys as villains, background characters, and as ‘sidekicks’ – important characters who are not the main protagonist.

For completeness I have also scored the Rounded Representation test for Jinty as a whole; there are few male protagonists (but at least one) and none of the male characters I can immediately think of show superhuman physical abilities, though some of them can certainly do magic. I would also say that none of the male characters face widespread societal challenges, though again I am open to examples being sent in. (Perhaps little brother Per in “The Song of the Fir Tree”, as he and his sister escape Nazi persecution across the breadth of Europe.)

So we can see that in comics aimed at girls, the roles available to male characters were very nearly as wide as the roles available to female characters – there were very few male protagonists and perhaps some other gaps in what they were shown doing, but overall boys and men very much formed part of the world depicted in girls’ comics. Is the same the case in boys’ comics – did they show an equally wide range of female roles? Did they show a full range of male roles? The next post will tell more.

The Bechdel Test and Beyond – part II

Here are some more try-outs of the new Rounded Representation test that I have devised. Explaining the results below should hopefully make it clearer to readers how the test is supposed to work. This time, I have chosen three modern mainstream stories that are targeted at an audience of girls: the recent feature film based on My Little Pony, “Equestria Girls: Rainbow Rocks“; one sample episode of the Barbie webcast show “Life In The Dreamhouse“; and the 2010 Disney feature film “Tangled“.

Rounded Representation 2

Range of emotions shown:

  • The “My Little Pony” film (hereafter MLP) certainly covers all the bases on my test.
  • Barbie is a generally happy character and the short episode in question includes friends who collaborate with each other; I wouldn’t say there’s much depiction of anger or fear in that episode though I am being a bit unfair as I don’t like the show much and so haven’t watched much of it. In other clips, Barbie’s rival probably does show anger and the characters probably do show some fear at points, but this is not a series that has a lot of emotional highs & lows.
  • “Tangled” again has a pretty happy main character and a female villain who is quite angry at times; Rapunzel also has a lot to learn about the world so she has moments of fear and doubt but eventually wins through to the love of her family and to romantic love too.

Range of abilities shown:

  • In MLP, the main female characters play musical instruments (showing physical abilities that are things that real people might do), and they solve problems and therefore display mental abilities that match things that real people might do. They also can do astounding leaps into the air (showing physical powers that are more superhuman than realistic) and can do magic (thus mental superhuman abilities).
  • Barbie and her friends show realistic and more-than-realistic physical prowess but don’t really solve any intellectual challenges or show mental powers beyond the norm.
  • Rapunzel does quite a lot of physical stuff – running, jumping, hitting people with frying pans – and some of it is more-than-human (tying people up with her hair). Both she and her female antagonist can do magic, and they both have to think hard in order to solve problems too.

Range of challenges faced:

  • In MLP, there’s a lot of stuff going on in the one film. One character has to redeem herself in the eyes of her friends and her school (individual challenge), but the group as a whole have to save the world from the three magical sirens who are threatening it (societal challenge). There are threats to the well-being of the main characters but also goals that they are trying to reach (playing together really well as a rock band, having their song-writing skills appreciated). Some of these challenges are real-world ones that any viewer might also face, but saving the world from mystical forces definitely comes under the category of the fantastical.
  • Barbie and friends have a fairly fantastical (not to mention silly) challenge to face but it only affects them as individuals, and it’s more of a personal goal than a threat to them.
  • Rapunzel doesn’t need to save the world but she does have both personal goals to fulfill and a very real threat to defeat. Arguably some of her challenges are ones that a real viewer might face – finding friends, finding love, getting back to her family – but it’s a bit tenuous and I would say the story is fairly firmly based in the fantastical challenges of defeating her magical enemy.

Range of ages shown:

  • MLP is set in a high school and it does have a restricted group of ages as a result. The characters are young, perhaps teenagers rather than tweens, but you don’t really see any female babies or children, or any old women. (You do in the TV series of MLP, however.) There are some female grownups (teachers) who have a minor role in the film but are present. (NB I am counting ‘young adult’ as being around the 18 – 25 range – treated as an adult for many purposes in society, but not expected to have a family of their own or necessarily to have embarked on a career.)
  • Barbie only really seems to include young adults and grownups (I am categorizing Barbie as a grownup because the episode I watched had her talking about her many careers, and referring to Ken as her boyfriend of many years). Other episodes in the series do include Barbie’s younger sisters so would score slightly more widely.
  • Tangled does show a pretty wide range of female characters at different ages.

Range of roles shown:

  • MLP – the many female characters cover the wide range of possibilities, as heroes, villains, sidekicks, and background cast.
  • Barbie doesn’t really seem to have any antagonists in this episode (in others she does have a rival, Raquelle) but nor are any background characters shown, whether female or male.
  • And in Tangled there is a notable gap in that there are no female sidekicks (something that has been noticed recently elsewhere).

I will get back to British comics in the next post, promise! Hopefully the above gives an easy-to-follow explanation of what the various categories meant in my test, and why they might be ticked or left blank. I think it also shows that just because something is targeted at a female audience, it does not necessarily cover a diverse range of representation possibilities.

The Bechdel Test and Beyond

I have been trying to come up with a good way of looking at the characters in girls’ comics (and boys’ comics too), to help me think about diversity and representation in a structured, repeatable fashion. Hopefully that structure could also be used on other ranges of comics, to compare and contrast.

The Bechdel Test, which you may well have heard of previously, has become a fairly well-known way to check whether a story passes a pretty basic test of representation. It works quite interestingly in a context where the vast majority of stories told are by, about, and for men, in that it highlights those stories which have at least a bare minimum of female representation in them (to pass, they have to include at least two named female characters, who talk to each other at some point, and who don’t just talk about a man). It’s a starting point for analysis, not a tick that says the story is a great feminist achievement. But in a genre which is intrinsically focused around girls, this test becomes fairly meaningless. I could imagine a Jinty story which doesn’t pass the Bechdel Test (though the vast majority of them would do), but it would fail it in a different way from the male-dominated Hollywood stories that the test was designed for. The first rule of doing comics for girls is clearly that the main character must (almost always) be a girl: so stories in this genre will almost certainly only fail the test if the main character is the only character and therefore has no one else to speak to.
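The test criteria are mechanical enough to sketch as code. Below is a minimal illustration in Python, using a made-up story model – the character and conversation structures here are my own invention, purely for demonstration, not part of any established tool:

```python
# A minimal sketch of the Bechdel Test as code. Characters are a dict of
# name -> gender; conversations are (speaker, listener, topic) tuples,
# where topic is the name of a character being discussed, or None.

def passes_bechdel(characters, conversations):
    """Return True if two named female characters talk to each other
    about something other than a man."""
    for speaker, listener, topic in conversations:
        if (characters.get(speaker) == "f"
                and characters.get(listener) == "f"
                and characters.get(topic) != "m"):
            return True
    return False

story = {"Anna": "f", "Bea": "f", "Carl": "m"}
print(passes_bechdel(story, [("Anna", "Carl", None)]))   # False
print(passes_bechdel(story, [("Anna", "Bea", None)]))    # True
print(passes_bechdel(story, [("Anna", "Bea", "Carl")]))  # False
```

Running the ‘reverse Bechdel’ described above would just mean swapping `"f"` for `"m"` in the two gender checks.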

So there is little point in trying to measure female representation in girls’ comics by using the Bechdel Test (you could measure the lack of male representation in them by doing a reverse Bechdel Test but I’m not sure that this would tell you very much more). We need something with a higher bar than testing for the mere existence of female characters and their minimal interaction together. I propose a ‘Rounded Representation’ test, therefore: looking at the range of portrayals of female characters in the stories under analysis. OK, so girls’ comics are focused around girls, duh. But do they still stereotype girls and limit the ways they are represented, or do they allow their female characters to represent a much wider range?

I have chosen a few initial attributes to look at, and made some initial scores off the top of my head. The yellow items below are my generalised scores for Jinty across its run; the items in blue are scored with reference to a specific issue of Misty that I could easily access at the time of writing (April 1978 – available online). Does Jinty include stories where the female characters show the basic emotions listed below? Do the characters have a range of things they are shown as doing, whether realistic or not (sports, feats of superhuman strength, doing well in school, reading people’s minds)? Do the stories show the girls facing a wide range of different kinds of challenges, in a range of roles both positive and negative? And do you see only young and pretty girls represented, or are they shown as people who feature in stories across the spectrum of ages? If you are looking at the whole run of Jinty then yes, you see pretty much the whole gamut; and even if you only look at one specific issue of one comic targeted at girls (the Misty example in blue) then again, yes, even in one issue you see a pretty wide range of representation of the female condition.

Rounded Representation test
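For anyone who wants to apply the test to their own reading, the scorecard could be kept as a simple data structure so that different titles can be compared like for like. This is only a sketch: the attribute lists below are my paraphrase of the categories used in this post, not a definitive taxonomy, and the example ticks are invented:

```python
# The Rounded Representation scorecard as data: each group of attributes
# maps to the items that could be ticked for a given title or issue.

ATTRIBUTES = {
    "emotions": ["happiness", "anger", "fear", "love"],
    "abilities": ["physical realistic", "physical superhuman",
                  "mental realistic", "mental superhuman"],
    "challenges": ["individual", "societal", "goal", "threat",
                   "mundane", "fantastical"],
    "ages": ["child", "teen", "young adult", "grownup", "old"],
    "roles": ["protagonist", "villain", "sidekick", "background"],
}

def score(ticks):
    """ticks: dict mapping attribute group -> set of ticked items.
    Returns (ticked, possible) so titles can be compared."""
    possible = sum(len(items) for items in ATTRIBUTES.values())
    ticked = sum(len(ticks.get(group, set()) & set(items))
                 for group, items in ATTRIBUTES.items())
    return ticked, possible

# e.g. a title that only ever shows happy grown-up heroines:
print(score({"emotions": {"happiness"}, "ages": {"grownup"},
             "roles": {"protagonist"}}))  # (3, 23)
```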

So what, you might say – surely it’s almost a dead cert that across a whole run of several years you will get the range of possibilities used. Well, let’s try that analysis again, but this time with BME (Black and Minority Ethnic) characters.

Rounded Representation 2

Even though we are talking about some 350 issues / 7 years’ worth of comics, the yellow items show right away that the range of depictions of BME characters is massively thinner than that of female characters*. BME characters aren’t (in Jinty) shown as saving the whole world or depicted in fantastical situations – they are shown in more mundane situations where the challenge they face is about them individually. They aren’t shown in whole families or as ‘people in the crowd’ (actually I need to amend this a little, because in “Life’s a Ball for Nadine” we do see Nadine’s parents). If you are going to include fewer BME characters in the first place then it’s unsurprising that there are fewer roles given to them, but I suspect the gaps also highlight some tokenistic thinking too. Perhaps the gaps imply that it’s reasonable to have a story or two that are specifically about a Black British girl or a Chinese girl, or indeed to make her a villain; but to include BME characters as part of the expected background pattern of life is too much to expect?

* I am happy to explain my scoring in more detail if anyone asks in the comments; some of the elements may well need revising as it was a fairly hasty assessment. Apologies also for assessing at the pretty crude level of ‘BME characters’ which is itself a loaded choice, I know.

The blue items from my fairly brief analysis of an issue of Misty highlight further the fact that there is just so much less inclusion of non-white people in this era of comics. In one of the stories there is a sinister Chinese man who smiles happily and is clearly a villain – single-handedly he accounts for 3 of the 5 attributes ticked. This issue also includes “The Cult of the Cat” and I have slightly generously included Bast’s priestesses young and old, in the background, to account for the remaining two attributes ticked.

It’s immediately obvious when reading girls’ comics that the majority of the characters in them are female: that means that these comics have a great chance to represent a wide range of human possibilities in the shape of those female characters. Girls’ comics may not be bastions of feminism but just the fact that they show girls and women as main characters, villains, and sidekicks – and show them as schemers, bullies, and heroes as well as paragons of virtue – means that the girl reader sees lots of ways of being, not a single simple straitjacket. The above gives us a way to show this range of ways of being: a method that can be applied in other cases too. We can ask whether this range of representation is made available in cases of other disadvantaged groups (the answer above being, probably not).

We can also ask whether other girl-focused stories show the same range of representation. I’ve watched a few episodes of Barbie’s “Life In the Dreamhouse” and while I am not going to do a full analysis of that show, I would score it as probably lower than the Jinty or Misty scores above – do you ever see old people on it or only beautiful young people? Does the protagonist ever face a widespread societal challenge? I don’t think so (but could be proved wrong by a more assiduous viewer). Compare that to “My Little Pony”, also targeted at a young female audience – the scores for female representation are likely to be much more akin to the Jinty scores, I’d hazard.

Now I need to apply the same analysis to girls in boys’ comics – and to boys in girls’ comics!

Health warning – as with any fairly basic analysis, there is lots and lots omitted in the interest of simplicity. There could be a lot more emotions included, for a start – such as guilt or envy – and this analysis certainly says nothing about whether any individual character is a thin cardboard cutout. It just says whether, in this genre, girls and women are allowed a range of slots in the story rather than always being shown doing the same thing in the same way – always the love interest and never the hero.

Edited to add – this is the 400th post on this blog! Very suitable to have this sort of thinky analytic piece on such an auspicious number. Many thanks all for reading the blog.

Story length through Jinty’s life

I have created a new page listing the stories in Jinty by publication date. This seemed like an interesting and useful addition to the list of stories in alphabetical order that has been in place on the blog since we started. As part of the information on that new page it seemed sensible to count the number of episodes for each story, too (where possible) – luckily for me, the Catawiki data that I was using to compile this information gave me the ability to include that for almost all stories. As I put together the list, I got the impression that in the last year of Jinty‘s publication, the story length was getting shorter and shorter: so I pulled together some stats on it.

For each year below, there are some stories I excluded from the statistics, either because I didn’t have a complete count of all the episodes (for instance where a story had started in Lindy or Penny before their merger with Jinty), or because they were by their nature long-running humour strips with no specific start or end point. I’ll give a list of the excluded stories and their running lengths further down this post.

  • For 1974, the mean story length is just under 16 episodes and the mode (most usual) story length is 13 episodes
  • For 1975, the mean is just under 18 episodes and the mode is 16 episodes
  • For 1976, the mean is just under 15 episodes and the mode is 19 episodes
  • For 1977, the mean is just over 14 episodes and the mode is 11
  • For 1978, the mean is just over 16 episodes and the mode is 18
  • For 1979, the mean is just over 14 episodes and the mode is 12
  • For 1980, the mean is 11.5 episodes and the mode is 12
  • For 1981, the mean is 11 episodes and the mode is 10

We can see that the two averages do go up and down over the run of Jinty. Having said that, the drop-off in story length in 1980 and 1981 does look like a real change, despite that context of background variation. (I’m not going to do any full-on statistical analysis with standard deviations and so on though!) Both average figures are down in those two years, because there are fewer long stories pushing up the mean as well as a general trend to the slightly shorter length of 10 – 12 episodes.
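For anyone wanting to reproduce these averages from the spreadsheet, the calculation is straightforward with Python’s statistics module. The episode counts below are made up for illustration, not the real Jinty data:

```python
# Mean and mode of story lengths for one year, with illustrative data.
from statistics import mean, mode

# Hypothetical episode counts for the stories ending in one year
episode_counts = [12, 12, 12, 10, 11, 13, 9, 13]

print(round(mean(episode_counts), 1))  # 11.5
print(mode(episode_counts))            # 12
```

One caveat: if a year happens to have two equally common story lengths, `statistics.multimode` (Python 3.8+) returns all of them, which is safer than `mode` for spotting ties.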

Which stories did I exclude from the analytics, and why?

  • The humour strips with no specific story arc: “Dora Dogsbody” (94 episodes), “Do-it-Yourself Dot” (62 episodes), “The Jinx From St Jonah’s” (112 episodes), “The Snobs and the Scruffs” (12 episodes), “Desert Island Daisy” (9 episodes), “Bird-Girl Brenda” (27 episodes), “The Hostess with the Mostess” (19 episodes), “Bet Gets The Bird!” (11 episodes), “Alley Cat” (163 episodes), “Sue’s Fantastic Fun-Bag!” (111 episodes), “Bizzie Bet and the Easies” (27 episodes), “Gaye’s Gloomy Ghost” (96 episodes).
  • “Merry at Misery House” (66 episodes) is not a humour strip but like those above, it has no specific overall story arc, no obvious beginning or end that is worked towards throughout its run. I have therefore excluded that too. The same goes for “Pam of Pond Hill” which ran to a mighty 126 episodes in Jinty and then on into Tammy of course.
  • The stories that I have incomplete episode information about: “Finleg the Fox”, “Penny Crayon”, “Hettie High-and-Mighty”, “Gypsy Rose” (these stories are not catalogued on Catawiki as a group), “Rinty n Jinty”, “Seulah the Seal”, “Tansy of Jubilee Street”, and “Snoopa”. Various of those would be excluded even if I had complete episode numbers, of course.
    • Edited to add: further information has been given in the comments below. “Finleg” and “Hettie” ran for 7 episodes in Lindy, and “Tansy” ran for 45 episodes in Penny. “Seulah” ran for 11 episodes in Penny, and then started a new story in Jinty & Penny, which I hadn’t really realised. The two Seulah stories were more like separate arcs in a bigger story than self-contained stories in themselves. Many thanks to Marc for this information! I will add them into the spreadsheet and see if it makes any difference to the years in question.
    • “Snoopa” ran for 45 episodes in Penny, which Mistyfan confirms below (many thanks). As a gag strip, this would not be included in the year-on-year statistics in any case.

Longest run of an individual story? “Alley Cat” has all the others beat, at 163 episodes; runners-up are “Pam of Pond Hill” at 126 episodes, and then “The Jinx From St Jonah’s” and “Sue’s Fantastic Fun-Bag!” neck and neck at 112 and 111 episodes respectively. However, if you exclude these and look at the length of the ‘normal’ stories, then the top three are “Somewhere Over the Rainbow” (36 episodes), “Fran of the Floods” (35 episodes), and “Always Together…” (29 episodes). (Phil Townsend does particularly well for long-running stories, as “Daddy’s Darling” clocks in at 24 episodes and “Song of the Fir Tree” at 22 episodes.)

At the other end of things are some short stories. There are only two single-episode stories: “Holly and the Ivy” and “Mimi Seeks a Mistress”. “Freda’s Fortune” is the only two-episode story. “Mimi” was a reprinted story, printed towards the end of 1980; possibly “Holly” and “Freda” were intended for publication in annuals or summer specials and then used as filler.

There are a few 3 or 4 episode stories: “The Birds”, “The Changeling”, “Casey, Come Back!”, and “The Tale of the Panto Cat”. This is also an odd length for a story – long enough to allow for a bit of development, but short enough to feel a bit abruptly cut off when you get to the end. Of these four, I’d say that “The Birds” is the one I find uses its length most successfully, though “Panto” works pretty well as a seasonal short. The slightly-longer “Her Guardian Angel” (5 episodes) likewise uses its length reasonably well to give us a seasonal amusement.  Some other shorter stories, such as “Badgered Belinda” (7 episodes), do read like they have probably been cut down from an originally-intended standard length of 10 – 12 episodes.

The spreadsheet with this information is available on request – please comment and I will be happy to email it to you if you want.