Earlier this month, kids' social network Habbo grabbed the headlines when Channel 4 reported on the sexual predators that allegedly stalk the site's 265m users.
“Within two minutes I was being asked ‘do you have a webcam?’, ‘can we chat on MSN or Skype?’,” said Channel 4 News producer Rachel Seifert.
“I was also, within a couple of minutes, asked to strip, fully naked, and asked what would I do on a webcam.”
The outcry was predictable, and understandable.
Habbo did its best to lay people’s fears to rest. It detailed the protective measures that it uses, muted all public conversations and began its own investigations into user behaviour.
But it wasn't enough to prevent further reports throughout the mainstream press, or to stop GAME, WH Smith and Tesco pulling the site's prepaid content cards.
The incident raised several questions about child safety online. While Habbo is in essence a chat room, its game-like interface draws attention to our industry.
UKIE CEO Dr Jo Twist said: “The games industry generally has a good track record in having measures in place to protect children.
“We have PEGI, parental controls on all consoles and most online games have robust and rigorously enforced policies to reduce the risk of children accessing inappropriate content.
“The Habbo case has shown very clearly how important it is to make sure your systems are checked and tested regularly.”
In his statement last week, Habbo CEO Paul LaFontaine said: “We filter content and block inappropriate users. We also employ more than 225 moderators, tracking some 70m lines of conversation globally every day on a 24/7 basis.”
But the question this raises is not ‘why has Habbo failed to protect users?’ – instead we should be asking ‘what more can be done?’
A Disney spokesperson told MCV that it has gone beyond typical filters to protect users of its popular Club Penguin site.
The site’s Safe Chat only allows kids to use words that are on the approved ‘white list’ – a different approach to the usual ‘black list’ of banned words. Meanwhile, Ultimate Safe Chat gives users a set list of phrases and emotes that can be used, guaranteeing no unapproved messages.
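The whitelist approach described above can be sketched in a few lines. This is an illustrative mock-up of the general technique, not Disney's actual implementation — the word list and function names here are assumptions for the example.

```python
# Sketch of whitelist chat filtering: a message is allowed only if
# every word it contains appears on an approved list. This inverts the
# usual blacklist model, where anything not explicitly banned passes.

APPROVED_WORDS = {"hello", "hi", "lets", "play", "a", "game", "cool", "igloo"}

def is_message_allowed(message: str) -> bool:
    """Return True only if every word in the message is pre-approved."""
    words = message.lower().split()
    return bool(words) and all(w.strip(".,!?") in APPROVED_WORDS for w in words)

print(is_message_allowed("Hello, lets play a game!"))  # every word approved -> True
print(is_message_allowed("do you have a webcam"))      # unapproved words -> False
```

The trade-off is clear: a whitelist guarantees nothing unapproved gets through, at the cost of restricting what users can say — which is why Club Penguin offers it as a stricter mode rather than the only one.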
Even Nintendo has promised to moderate all messages sent through Wii U’s Miiverse service.
But Crisp Thinking – a company that develops monitoring and profiling software for the likes of Sony, Ubisoft, EA and Moshi Monsters – warns that no method of protection is completely watertight.
“Filtering can still only go so far,” explained CEO Adam Hildreth. “When users communicate, it is inevitable that things will always get through filters. In some of the most serious incidents involving online bullies and sex predators, so-called bad language isn’t used at all.
“And the sheer number of messages means lots of issues will get missed and it’s just not possible for a company to hire thousands of moderators.”
SPREAD THE WORD
Instead, the greatest tool in our arsenal is education. Parents need to be made aware of what their children do on these sites, and how they are protected.
One ace up the games industry's sleeve is PEGI, which becomes legally enforceable next month, while UKIE's six-month campaign will teach consumers more about the ratings system.
“It is vital that we as an industry help parents understand the social spaces their kids use,” said Twist.
In the meantime, it is up to publishers and social network owners to remain ever vigilant against those who would abuse the internet, without stifling the freedom and experiences it allows the games industry to offer consumers.