Julie Inman Grant, eSafety Commissioner of Australia, to speak at upcoming Responsible Tech Summit in NYC

All Tech Is Human is thrilled to announce that Julie Inman Grant, eSafety Commissioner of Australia, will be speaking at its upcoming Responsible Tech Summit on Friday, May 20th in NYC. This gathering of 120 leaders focused on improving digital spaces will be held at the Consulate General of Canada in New York, our event partner. Inman Grant has been a pioneering global leader in the push for better digital spaces through the Safety by Design framework, a recently announced youth panel, and more.

The Responsible Tech Summit on May 20th seeks to unite a diverse range of stakeholders to build on each other’s work and co-create a tech future aligned with the public interest. The all-day event (9am to 4:30pm) will feature panels, fireside chats, and plenty of networking and collaboration. The gathering will be in person, while the talks on stage will be livestreamed for a global audience. To find out more about the upcoming Responsible Tech Summit, read here.

All Tech Is Human specializes in bringing together a diverse range of stakeholders to tackle thorny tech & society issues. Previous summits, livestreams, and reports from our organization have featured individuals from Aspen Institute, Berkman Klein Center, World Economic Forum, Data & Society, Mozilla, IEEE, DataKind, Center for Humane Technology, IBM, Salesforce, New_Public, Deloitte, Accenture, the New York Times, Avanade, Facebook, Microsoft, Twitter, TikTok, Discord, Sesame Workshop, Consumer Reports, Google, the FCC, Hulu, Roblox, Partnership on AI, Web Foundation, Omidyar Network, Tony Blair Institute, and many more. All Tech Is Human held a virtual Responsible Tech Summit on September 15, 2020 that drew over 1200 registered attendees across 60 countries. Pre-Covid, the organization held summits in NYC, San Francisco, and Seattle. Its inaugural summit was held in NYC in the Fall of 2018.

Two of our recent reports dealt specifically with improving digital spaces. Our most recent is called the HX Report: Aligning Our Tech Future With Our Human Experience. All Tech Is Human is a member of the HX Project, alongside organizations such as Aspen Institute, Data & Society, Project Zero, and Headstream, which takes an “approach to talking about, engaging with, and designing technology in a way that is aligned with our needs as humans — not users.” In our HX Report we took a holistic approach to improving digital spaces, looking at product design, business models, content moderation, digital citizenship, tech augmentation, and tech & wellbeing.

Prior to the HX Report, our organization released Improving Social Media: The People, Organizations, and Ideas for a Better Tech Future. These two reports featured resources from over 150 organizations doing valuable work in the ecosystem, and included profile interviews with over 80 leaders focused on improving digital spaces.

Julie Inman Grant was profiled in our Improving Social Media report, released in February 2021. The interview is below.

Tell us about your current role:

For the past 4+ years, I have served as Australia’s eSafety Commissioner. I started with a staff of 35 and have grown it into a nimble and innovative agency of 115. Established in 2015, the Office of the eSafety Commissioner (eSafety) is an independent online safety regulator and educator whose sole purpose is to ensure that our citizens have safer and more positive experiences online. We are the first regulator of its kind in the world and take a multipronged approach to achieving our goals. We focus on Protection, through our reporting schemes and investigations; Prevention, through education programs and awareness raising; and Proactive and Systemic Change, where we try to stay ahead of technology trends and work with industry to encourage them to develop safer online products.

eSafety has a range of civil powers to compel takedown of illegal or harmful content, whether it is child sexual abuse material, pro-terrorist content, image-based abuse (colloquially known as “revenge porn,” but we do not call it that) or serious cyberbullying of a child.

There is pending legislation that would give us additional powers to compel take-down of serious adult cyber abuse and require companies to live up to Basic Online Safety Expectations (“the BOSE”) including the ability to compel transparency reports to reduce opacity in policies, but also to understand how certain issues are being tackled and whether companies are enforcing their policies consistently and fairly.


Tell us about your career path and how it led you to your work’s focus:

I most definitely didn’t imagine in the 1980s, when I was attending university in Boston, that I would end up becoming a government regulator of the technology industry in the Land Down Under, but my career has been bookended by roles in government. My first job interview out of university was at the CIA to analyze the psychology of serial killers, but I ended up taking a role on Capitol Hill with my hometown congressman instead. I was working on a range of social justice issues, but the congressman asked if I would take on the breakup of the Baby Bells and look after technology policy because we had a “small little software company in our district called Microsoft.” So, in 1991, I embarked upon a career at the intersection of technology, safety and policy before there was an Internet. After a stint in the NGO sector and in Brussels, I landed as Microsoft’s second DC lobbyist immediately prior to the US DOJ antitrust case. In this role, I was involved in shaping Section 230 of the Communications Decency Act and helping to organize the first White House Summit on Online Safety during the Clinton Administration, and after 5 long years, I moved to Australia to start their corporate affairs function in the region. I developed my speciality in safety, security and privacy in an APAC role and finished my 17-year career at Microsoft as the global head of privacy and safety policy and outreach at Redmond HQ. I had two exciting and eye-opening years at Twitter setting up and running their public policy and philanthropy functions in ANZ and Southeast Asia before joining Adobe as their head of government relations across Asia Pacific. Nine months later this poacher became a gamekeeper, and I was appointed to serve as eSafety Commissioner of Australia.

In your opinion, what are the biggest issues facing social media?

The failure of corporate leadership to recognize and embrace their tremendous societal impact and the ill effects technology can have on humanity, and to actively take responsibility for these hazards. Had more of these companies prioritised the safety, privacy, security and overall well-being of their users and balanced the imperative of “profits at all costs” with their responsibility to prevent and protect against a range of online harms, they would be in a much better position. If you add to that tremendous market power, the perception of evasion of international taxes and occasional recalcitrance toward governments, the biggest issue facing them will be the force of global governments regulating them in ways that might be unworkable, inconsistent and detrimental to their future growth. So, to me, this is the biggest threat to the industry.

In terms of the looming threats to users of social media and the industry’s ability to address them, the various ways threat actors are weaponizing platforms to spread child sexual abuse material, pro-terrorist/extremist content and other forms of illegal content cause the most harm to society, but there are also a range of technology tools available to tackle these issues, if there were greater will to do so. It is the more “contextual issues” and the forms of harmful content that are not patently illegal, and that are likely to require more “ecosystem change,” investigation and human moderation, that will be more challenging to tackle. This includes issues related to disinformation, organic online harassment brigading (we refer to this as cross-platform volumetric trolling), preventing children’s exposure to porn through age verification, and anonymity and identity shielding. These harms all lead to toxicity on platforms and concerning impacts on humanity, and the detection, removal or “solutions” are not necessarily automated. These are much more complex and nuanced.

What “solutions” to improving social media have you seen suggested or implemented that you are excited about? How do we ensure safety, privacy and freedom of expression all at the same time?

I served as an “online safety antagonist” within industry for more than two decades, and I could never convince company leadership that addressing “personal harms” should be elevated to the same status as privacy or security. I brought “safety by design” to Microsoft leadership more than a decade ago and, while there was a tacit understanding of the importance of online safety, it was never given the priority, investment or attention that the other disciplines were. Whilst at Twitter, I saw the devastation that targeted online harassment wrought on humanity every single day – and it demoralised me too. The company that I was so excited to join, that stood for the levelling and promotion of voices online that previously weren’t heard, simply wasn’t doing enough to protect those voices, particularly marginalised voices. I could not defend this anymore, so I sadly left a company that I saw as having so much potential to do good in the world.

As eSafety Commissioner, I built an incredible team to work with industry to create a set of “safety by design principles” that were achievable, actionable and meaningful. I understood that this is something we needed to do with industry rather than to industry to be effective, as it will involve changing the ethos of how technology design, development and deployment typically happen. We went “deep” over about eight months to uncover innovation and best practice in this space to elevate examples, and ended up with three sets of principles: “Service Provider Responsibility”; “User Empowerment and Autonomy”; and “Transparency and Accountability.” Because we want industry to be successful at assessing risk at the front end and building in safety protections to prevent misuse, rather than retrofitting after the damage has been done, we decided to turn the principles into a free, interactive assessment tool so that companies could use it as an audit tool of sorts, learn how to address safety weaknesses and have a robust “safety impact assessment” to help them build their roadmap. This tool will be released in a few months – one version is for start-ups, the other for more mature enterprises.

Safety by design does not end there. We believe the VC and investment community has an important role to play in ensuring user safety as a way to ensure more ethical investing, managing risk and in preventing “tech wreck moments” – these are preventable. In January 2021, we released an investor toolkit. We’re also piloting safety by design curricula in four universities in Australia. We believe the next generation of coders, designers and engineers should be building technology with ethics, human rights and safety in mind.

By the way, I reject the supposition that privacy, safety and freedom of expression are diametrically opposed or mutually exclusive. They need to be balanced – and occasionally recalibrated – like the legs of a stool…

When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to educate the general public, governmental oversight and the role of citizens in terms of literacy and how they engage with platforms. In your opinion, what area do you think needs the most improvement?

They all need improvement and need to work in harmony if we are going to make the online world more hospitable, civil and positive. This balance has informed the way in which I structured eSafety. Everything we do is evidence-based, so I have an internal research team that delves into qualitative and quantitative measures, and this informs our public messaging, education materials and resources. These are designed to reach specific audiences – whether parents, educators, senior citizens or children themselves – with the aim of helping citizens harness the benefits of technology and understand and mitigate risks with pragmatic and actionable solutions. We are aiming to encourage behavioural change (which takes a long time) and measure that impact through evaluation. We reject purely fear-based messages and also leverage the education sector to help reinforce messages and incident response throughout a child’s educational journey.

Clearly we believe that government oversight is required to serve as a “safety net” for our citizens when online abuse falls through the cracks of platforms’ content moderation systems: to remove harmful content and, when necessary, to use civil penalties to punish perpetrators and fines against content hosts. While I’d much rather use the carrot, there are times the stick is definitely needed.

And, as expressed through our commitment to safety by design, we absolutely believe that industry has to do better in making their platforms safer and more secure, and that they need to be both more transparent and accountable for harms that take place on their platforms. They build the online roads; they also need to erect the guardrails, occasionally police those roads for dangerous drivers and enforce the rules so that other users do not end up as online roadkill.

What people and organizations do you feel are doing a good job toward improving social media? Why/how would you say their work is helping?

There are so many people doing such great work all around the world, committing to make the world a safer place. We notice that examples of such work often focus on North America and Europe and that very few people outside of the safety community know what we’re doing in Australia. We may be small and far away but we think what we’re doing for our citizens is pretty unique and has impact. 

There are some incredible technologists out there devoting their brain power and careers to making the online world a better place – this includes Dr. Hany Farid of the University of California, Berkeley, a co-developer of PhotoDNA, and Christian Berg from Sweden, who has built tools for law enforcement through companies like NetClean and Paliscope. There are some really great safety tech companies popping up too, including Spectrum Labs, Sentropy, Hive, Tiny Beans, Family Zone and numerous others.

There are incredible researchers and advocates, particularly all of those affiliated with Global Kids Online, who bring a lot of rigour, a genuine concern for children and human rights, and common sense. Dr. Sonia Livingstone, Amanda Third and Anne Collier come to mind, and I love the work of Sameer Hinduja and Justin Patchin of the Cyberbullying Research Center. They are the real deal! Some of the female lawyers and academics in the US working on intimate privacy, ethical AI and advocating for women and minorities online are doing ground-breaking work, including Danielle Citron, Mary Anne Franks and Safiya Umoja Noble – they are my she-roes!

I am honoured to work with some amazing human beings through the WeProtect Global Alliance including Baroness Joanna Shields, Ernie Allen, Julie Cordua of Thorn and passionate advocates like John Carr. It is amazing what a bit of compassion, strategy, brains and strong communications skills can do to enable meaningful change! 

Part of our mission at All Tech Is Human is to diversify the people working in the tech industry. In your opinion, what academic or experience backgrounds should be more involved in improving social media?

We need more people in government who understand how technology works and the ethos of the industry. This is what I believe has made me a more effective regulator. Computer science and engineering need to become more interdisciplinary. We need more ethics, human rights, psychology, anthropology and safety aspects integrated into the curriculum. Most importantly, in companies where the “engineer is king,” those with social science backgrounds should be recognized for what they bring to human-centered design.

What makes you optimistic that we, as a society, will be able to improve social media?

There are some bright spots and innovations for good from the industry that give me hope, but these still are not as consistent and comprehensive as they should be. And they certainly are not universal. 

The pendulum is definitely swinging the other way, and the right kind of scrutiny is happening. Self-regulation has been a failure; the quarter century of “intermediary immunity” has probably seen its day. I hope companies will well and truly start to embrace Safety by Design and take pride in making the world a better place whilst making a profit! I hope governments do start to take pragmatic action – and that it isn’t all blunt force in approach. We’re all in this together, and our collective success or failure will be riding on these opportunities to harness the benefits of technology and minimise the risks.
