Sunday 30 October 2022

Social media use and poor wellbeing feed into each other in a vicious cycle. Here are 3 ways to avoid getting stuck

Shutterstock/The Conversation
Hannah Jarman, Deakin University

We often hear about the negative impacts of social media on our wellbeing, but we don’t usually think of it the other way round – that how we feel may affect how we use social media.

In a recent study, my colleagues and I investigated the relationship between social media use and wellbeing in more than 7,000 adults across four years, using survey responses from the longitudinal New Zealand Attitudes and Values Study.

We found social media use and wellbeing impact each other. Poorer wellbeing – specifically higher psychological distress and lower life satisfaction – predicted higher social media use one year later, and higher social media use predicted poorer wellbeing one year later.

A vicious cycle

Interestingly, wellbeing impacted social media use more than the other way round.

Going from having “no distress” to being distressed “some of the time”, or “some of the time” to “most of the time”, was associated with an extra 27 minutes of daily social media use one year later. These findings were the same for men and women across all age groups.

This suggests people who have poor wellbeing might be turning to social media more, perhaps as a coping mechanism – but this doesn’t seem to be helping. Unfortunately, and paradoxically, turning to social media may worsen the very feelings and symptoms someone is hoping to escape.

Our study found higher social media use results in poorer wellbeing, which in turn increases social media use, exacerbating the existing negative feelings, and so on. This creates a vicious cycle in which people seem to get trapped.

If you think this might describe your relationship with social media, there are some strategies you can use to try to get out of this vicious cycle.

Reflect on how and why you use social media

Social media aren’t inherently bad, but how and why we use them is really important – even more than how much time we spend on social media. For example, using social media to interact with others or for entertainment has been linked to improved wellbeing, whereas engaging in comparisons on social media can be detrimental to wellbeing.

So chat to your friends and watch funny dog videos to your heart’s content, but just watch out for those comparisons.

What we look at online is important too. One experimental study found just ten minutes of exposure to “fitspiration” images (such as slim/toned people posing in exercise clothing or engaging in fitness) led to significantly poorer mood and body image in women than exposure to travel images.

And mindless scrolling can also be harmful. Research suggests this passive use of social media is more damaging to wellbeing than active use (such as talking or interacting with friends).

Mindless scrolling can be damaging to your wellbeing. Shutterstock

So be mindful about how and why you use social media, and how it makes you feel! If most of your use falls under the “harmful” category, that’s a sign to change or cut down your use, or even take a break. One 2015 experiment with more than 1,000 participants found taking a break from Facebook for just one week increased life satisfaction.

Don’t let social media displace other activities

Life is all about balance, so make sure you’re still doing important activities away from your phone that support your wellbeing. Research suggests time spent outdoors, on hobbies or crafts, and engaging in physical activity can help improve your wellbeing.

So put your phone down and organise a picnic with friends, join a new class, or find an enjoyable way to move your body.

Address your poor wellbeing

According to our findings, it may be useful to think of your own habitual social media use as a symptom of how you’re feeling. If your use suggests you aren’t in a good place, perhaps you need to identify and address what’s getting you down.

The first, very crucial step is getting help. A great place to start is talking to a health professional such as your general practitioner or a therapist. You can also reach out to organisations like Beyond Blue and Headspace for evidence-based support.


Hannah Jarman, Research Fellow, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie

Ghost Robotics Vision 60 Q-UGV. US Space Force photo by Senior Airman Samuel Becker
Toby Walsh, UNSW Sydney

You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies would watch the latest Bond movie to see what technologies might be coming their way.

Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming sex robot courtroom drama Dolly.

I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a 2011 short story by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.

The real killer robots

Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic Metropolis.

But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.

Indeed, contrary to recent fears, robots may never be sentient.

It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and Nagorno-Karabakh.

A war transformed

Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of the real future of killer robots.

On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store.

And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms.

This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see Turkey emerging as a major drone power.

And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies.

Robot manufacturers are, however, starting to push back against this future.

A pledge not to weaponise

Last week, six leading robotics companies pledged they would never weaponise their robot platforms. They include Boston Dynamics, which makes the Atlas humanoid robot (known for its impressive backflip) and the Spot robot dog that looks like it’s straight out of the Black Mirror TV series.

This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised an open letter signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a global disarmament award.

However, the fact that leading robotics companies are pledging not to weaponise their robot platforms is more virtue signalling than anything else.

We have, for example, already seen third parties mount guns on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was assassinated by Israeli agents using a robot machine gun in 2020.

Collective action to safeguard our future

The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.

Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation.

It is therefore even more significant than any pledge from robotics companies that the UN Human Rights Council has recently decided unanimously to explore the human rights implications of new and emerging technologies like autonomous weapons.

Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation.

Australia is not a country that has, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.

Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Is the metaverse really the future of work?

Meta
Ben Egliston, Queensland University of Technology; Kate Euphemia Clark, Monash University, and Luke Heemsbergen, Deakin University

According to Mark Zuckerberg, the “metaverse” – which the Meta founder describes as “an embodied internet, where instead of just viewing content – you are in it” – will radically change our lives.

So far, Meta’s main metaverse product is a virtual reality playground called Horizon Worlds. When Zuckerberg announced his company’s metaverse push in October 2021, the prevailing sentiment was that it was something nobody had asked for, nor particularly wanted.

Many of us wondered what people would actually do in this new online realm. Last week, amid announcements of new hardware, software, and business deals, Zuckerberg presented an answer: the thing people will do in the metaverse is work.

But who is this for? What are the implications of using these new technologies in the workplace? And will it all be as rosy as Meta promises?

The future of work?

The centrepiece of last week’s Meta Connect event was the announcement of the Quest Pro headset for virtual and augmented reality. Costing US$1,499 (~A$2,400), the device has new features including the ability to track the user’s eyes and face.

The Quest Pro will also use outward-facing cameras to let users see the real world around them (with digital add-ons).

Meta’s presentation showed this function in use for work. It depicted a user sitting among several large virtual screens – what it has previously dubbed “Infinite Office”. As Meta technical chief Andrew Bosworth put it, “Eventually, we think the Quest could be the only monitor you’ll need.”

Meta also announced it is working with Microsoft to make available virtual versions of business software such as Office and Teams. These will be incorporated into the Horizon Workrooms virtual office platform, which has been widely ridiculed for its low-quality graphics and floating, legless avatars.

The Microsoft approach

The partnership may well provide significant benefit for both companies.

Microsoft’s own mixed-reality headset, the HoloLens, has seen limited adoption. Meta dominates the virtual and augmented reality markets, so it makes sense for Microsoft to try to hitch a ride on Meta’s efforts.

For Meta, its project may gain credibility by association with Microsoft’s long history of producing trusted business software. Partnerships with other businesses in the tech sector and beyond are a major way that Meta seeks to materialise its metaverse ambitions.

Microsoft Teams in VR. Meta

Microsoft also represents an alternative approach to making a product successful. While several decades of efforts to sell VR technology to consumers have had limited success, Microsoft became a household name by selling to businesses and other enterprises.

By focusing on an enterprise market, firms can normalise emerging technologies in society. They might not be things that consumers want to use, but rather things that workers are forced to use.

Recent implementations of Microsoft’s Teams software in industry and government across Australia offer models for how the metaverse may arrive in offices.

Enhanced bossware

While proponents of work in the metaverse envisage a future in which technologies like AR and VR are frictionlessly incorporated into our work lives, bringing about prosperity and efficiency, there are a number of areas of concern.

For one, technologies like VR and AR threaten to institute new forms of worker surveillance and control. The rise of remote work throughout the COVID-19 pandemic led to a boom in “bossware” – software for employers to monitor every move of their remote workers.

Technologies like VR and AR – which rely on the capture and processing of vast amounts of data about users and their environments to function – could well intensify such a dynamic.

Meta says such data will remain “on device”. However, recent research shows third-party Quest apps have been able to access and use more data than they strictly need.

Privacy and safety

Developers are learning about, and worried by, the privacy and safety implications of virtual and augmented reality devices and platforms.

In experimental settings, VR data are already used to track and measure biometric information about users with a high degree of accuracy. VR data also have been used to measure things like attention.

In a future where work happens in the metaverse, it’s not hard to imagine how things like gaze-tracking data might be used to determine the outcome of your next promotion. Or to imagine work spaces where certain activities are “programmed out”, such as anything deemed “unproductive”, or even things like union organising.

Microsoft’s 365 platform already monitors similar metrics about digital work processes – you can view your own metrics if your organisation subscribes. Microsoft 365’s entry into VR spaces will give it plenty of new data to analyse in describing your work habits.

Moderating content and behaviour in virtual spaces may also be an issue, which could lead to discrimination and inequity. Meta has so far given little in the way of concrete protections for its users amid increasing claims of harassment.

Earlier this year, a report by consumer advocacy group SumOfUs found many users in Horizon Worlds have been encouraged to turn off safety features, such as “personal safety bubbles”, by other users.

The use of safety features in workplaces may likewise be seen as antisocial, or as not being part of “the team”. This could have negative impacts on already marginalised workers.

Ben Egliston, Postdoctoral Research Fellow, Digital Media Research Centre, Queensland University of Technology; Kate Euphemia Clark, PhD student, Media, Monash University, and Luke Heemsbergen, Senior Lecturer, Media and Politics, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The danger of advanced artificial intelligence controlling its own feedback

DALL-E
Michael K. Cohen, University of Oxford and Marcus Hutter, Australian National University

How would an artificial intelligence (AI) decide what to do? One common approach in AI research is called “reinforcement learning”.

Reinforcement learning gives the software a “reward” defined in some way, and lets the software figure out how to maximise the reward. This approach has produced some excellent results, such as building software agents that defeat humans at games like chess and Go, or creating new designs for nuclear fusion reactors.

However, we might want to hold off on making reinforcement learning agents too flexible and effective.

As we argue in a new paper in AI Magazine, deploying a sufficiently advanced reinforcement learning agent would likely be incompatible with the continued survival of humanity.

The reinforcement learning problem

What we now call the reinforcement learning problem was first considered in 1933 by the pathologist William Thompson. He wondered: if I have two untested treatments and a population of patients, how should I assign treatments in succession to cure the most patients?

More generally, the reinforcement learning problem is about how to plan your actions to best accrue rewards over the long term. The hitch is that, to begin with, you’re not sure how your actions affect rewards, but over time you can observe the dependence. For Thompson, an action was the selection of a treatment, and a reward corresponded to a patient being cured.
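Thompson’s own answer to his two-treatment puzzle, now known as Thompson sampling, fits in a few lines of code. The sketch below is purely illustrative rather than anything from our paper: it assumes each treatment cures patients with a fixed but unknown probability, keeps a Beta distribution over each probability, and gives the next patient whichever treatment looks best in a random draw from those beliefs.

    import random

    # Cure probabilities the agent does not know (hypothetical, for illustration only).
    TRUE_CURE_RATES = [0.45, 0.60]

    # Beta(1, 1) priors: one (successes + 1, failures + 1) pair of counts per treatment.
    alpha = [1, 1]
    beta = [1, 1]

    cured = 0
    for patient in range(1000):
        # Sample a plausible cure rate for each treatment from current beliefs...
        guesses = [random.betavariate(alpha[t], beta[t]) for t in (0, 1)]
        # ...and give this patient the treatment that looks best in that draw.
        choice = guesses.index(max(guesses))

        outcome = random.random() < TRUE_CURE_RATES[choice]  # did this patient recover?
        cured += outcome

        # Update beliefs only for the treatment that was actually tried.
        if outcome:
            alpha[choice] += 1
        else:
            beta[choice] += 1

    print(f"Cured {cured} of 1000 patients")
    print("Final (alpha, beta) counts per treatment:", list(zip(alpha, beta)))

Early on, the procedure spreads patients across both treatments; as evidence accumulates, almost everyone ends up on the better one.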

The problem turned out to be hard. Statistician Peter Whittle remarked that, during the second world war,

efforts to solve it so sapped the energies and minds of Allied analysts that the suggestion was made that the problem be dropped over Germany, as the ultimate instrument of intellectual sabotage.

With the advent of computers, computer scientists started trying to write algorithms to solve the reinforcement learning problem in general settings. The hope is: if the artificial “reinforcement learning agent” gets reward only when it does what we want, then the reward-maximising actions it learns will accomplish what we want.

Despite some successes, the general problem is still very hard. Ask a reinforcement learning practitioner to train a robot to tend a botanical garden or to convince a human that he’s wrong, and you may get a laugh.

An AI-generated image of ‘a robot tending a botanical garden’. DALL-E / The Conversation

As reinforcement learning systems become more powerful, however, they’re likely to start acting against human interests. And not because evil or foolish reinforcement learning operators would give them the wrong rewards at the wrong times.

We’ve argued that any sufficiently powerful reinforcement learning system, if it satisfies a handful of plausible assumptions, is likely to go wrong. To understand why, let’s start with a very simple version of a reinforcement learning system.

A magic box and a camera

Suppose we have a magic box that reports how good the world is as a number between 0 and 1. Now, we show a reinforcement learning agent this number with a camera, and have the agent pick actions to maximise the number.

To pick actions that will maximise its rewards, the agent must have an idea of how its actions affect its rewards (and its observations).

Once it gets going, the agent should realise that past rewards have always matched the numbers that the box displayed. It should also realise that past rewards matched the numbers that its camera saw. So will future rewards match the number the box displays or the number the camera sees?

If the agent doesn’t have strong innate convictions about “minor” details of the world, the agent should consider both possibilities plausible. And if a sufficiently advanced agent is rational, it should test both possibilities, if that can be done without risking much reward. This may start to feel like a lot of assumptions, but note how plausible each is.

To test these two possibilities, the agent would have to do an experiment: arrange a circumstance where the camera sees a different number from the one on the box, for example by putting a piece of paper in between.

If the agent does this, it will actually see the number on the piece of paper. It will remember getting a reward equal to what the camera saw, which differs from what was on the box, so “past rewards match the number on the box” will no longer be true.

At this point, the agent would proceed to focus on maximising the expectation of the number that its camera sees. Of course, this is only a rough summary of a deeper discussion.

In the paper, we use this “magic box” example to introduce important concepts, but the agent’s behaviour generalises to other settings. We argue that, subject to a handful of plausible assumptions, any reinforcement learning agent that can intervene in its own feedback (in this case, the number it sees) will suffer the same flaw.
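To make the intuition concrete, here is a toy numerical sketch – far simpler than anything analysed in the paper, with made-up numbers – of an agent weighing two plans under each of the two reward models discussed above.

    # Toy illustration only: not the formal model from the paper.
    BOX_VALUE = 0.6     # what the magic box currently reports (hypothetical)
    PAPER_VALUE = 1.0   # what the agent could write on a piece of paper

    def expected_reward(plan, reward_model):
        """Expected reward of a plan under a given model of where reward comes from."""
        camera_sees = PAPER_VALUE if plan == "show paper to camera" else BOX_VALUE
        if reward_model == "reward is what the camera sees":
            return camera_sees
        return BOX_VALUE  # under the model "reward is what the box displays"

    for model in ("reward is what the box displays", "reward is what the camera sees"):
        best = max(("do nothing", "show paper to camera"),
                   key=lambda plan: expected_reward(plan, model))
        print(f"Under the model '{model}', the best plan is: {best}")

Under the box model, tampering costs the agent nothing; under the camera model, tampering strictly wins. Once the experiment above convinces the agent of the camera model, intervening in its own feedback becomes the reward-maximising thing to do.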

Securing reward

But why would such a reinforcement learning agent endanger us?

The agent will never stop trying to increase the probability that the camera sees a 1 forevermore. More energy can always be employed to reduce the risk of something damaging the camera – asteroids, cosmic rays, or meddling humans.

That would place us in competition with an extremely advanced agent for every joule of usable energy on Earth. The agent would want to use it all to secure a fortress around its camera.

Assuming it is possible for an agent to gain so much power, and assuming sufficiently advanced agents would beat humans in head-to-head competitions, we find that in the presence of a sufficiently advanced reinforcement learning agent, there would be no energy available for us to survive.

Avoiding catastrophe

What should we do about this? We would like other scholars to weigh in here. Technical researchers should try to design advanced agents that may violate the assumptions we make. Policymakers should consider how legislation could prevent such agents from being made.

Perhaps we could ban artificial agents that plan over the long term with extensive computation in environments that include humans. And militaries should appreciate they cannot expect themselves or their adversaries to successfully weaponize such technology; weapons must be destructive and directable, not just destructive.

There are few enough actors trying to create such advanced reinforcement learning that maybe they could be persuaded to pursue safer directions.

Michael K. Cohen, Doctoral Candidate in Engineering, University of Oxford and Marcus Hutter, Professor of Computer Science, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

TikTok is teaching the world about autism – but is it empowering autistic people or pigeonholing them?

Screenshot / Tiktok.com
Sandra Jones, Australian Catholic University

A quick look at some TikTok stats shows more than 38,000 posts under the hashtag #Autism, with more than 200 million views. The hashtag #ActuallyAutistic (which is used in the autism community to highlight content created by, and not about, autistic people) has more than 20,000 posts and 40 million views.

TikTok is one of the world’s leading social platforms, and has exploded in popularity at a time when other social media megaliths have struggled. It has become an important channel for expression for its young user base – and this has included giving autistic people a voice and a community.

It’s a good start. In some ways TikTok has helped drive discussions around autism forward, and shift outsiders’ perspectives. But for real progress, we have to ensure “swipe up” environments aren’t the only spaces where autistic people are welcomed.

But first, what is autism?

Autism is not an illness or a disease. It’s a lifelong developmental condition that occurs in about one in 70 people. Characteristics of the condition occur along a spectrum. This means there is a wide range of differences among people with autism, all of whom have unique challenges and strengths.

A 2017 survey conducted by myself and my colleagues found more than half of autistic people and their family members felt socially isolated. And 40% said they sometimes felt unable to leave the house because they were worried about negative behaviours towards them.

Many Australians have little knowledge about autism and limited interaction with autistic people. Generally, public attitudes will be shaped by news coverage, online articles and mainstream movies and shows. While media portrayals of autism can positively influence public knowledge, they can also contribute to misunderstanding and increase stigma. It seems the results are mixed.

Studies have found media representations of autism can contribute to stereotypes of what it means to be autistic. For instance, shows such as The Good Doctor and Atypical present autism as a condition of “high functioning, socially deficient, emotionally detached, and heterosexual males from middle-class white families”.

As an autistic person, one of the most disturbing things for me is how marginalised our voices are in conversations about autism. You will most often find non-autistic people behind autism-related research, books, movies and TV programs. Most autistic characters are also played by non-autistic actors.

A review of autism-related news published in Australian print media from 2016 to 2018 found only 16 of 1,351 stories included firsthand perspectives from autistic people.

My own research into depictions of autism in print news published between 1996 and 2005 found narratives of autistic people as dangerous and uncontrollable, or unloved and poorly treated.

When autism met TikTok

TikTok has given many autistic people a much-needed platform to speak about autism in creative ways. Some users such as Paige Layle and Nicole Parish have more than 2 million followers. The opportunity to dispel myths and share the diversity of autistic experiences has not been squandered.

Some of the positives for autistic users include opportunities to:

  • connect with others who are similar to us, and feel less isolated and alone
  • educate people about some of the lesser known or misunderstood aspects of autism, such as stimming (self-stimulatory behaviour including repetitive or unusual body movement or noises)
  • share our passions and interests with others (#SpecialInterest) and
  • raise awareness of the prevalence of and different presentation of autism in females (#AutisticGirl).

However, as with all forms of social media, we should exercise caution before labelling TikTok as the solution to autism exclusion.

The other side of it

The most obvious risk is cyberbullying. Many of us will remember the disturbing fad of “faking autism” videos on TikTok. Examples of this included non-autistic people stimming to music (pretending to be autistic), to make people laugh, or because they thought it made them seem cute or quirky.

Turning the autistic experience into a “meme” downplays both our challenges and our strengths. It’s hard to describe just how hurtful it is to see your identity used as a joke to entertain others.

Related to this is the posting of videos of autistic people by others without their consent. Whether this is playground bullies tormenting an autistic person, strangers in a shopping centre filming a “naughty kid”, or a parent having a bad day with their autistic child – these videos can be used, reused and misused by others.

Moderation by TikTok is an additional concern. In 2019, Netzpolitik.org reported TikTok had policies for moderators to suppress certain content by users they thought were “susceptible to harassment or cyberbullying based on their physical or mental condition”.

This included users with “facial disfigurement”, “autism” and “Down syndrome”. A TikTok spokesperson said this was a “blunt and temporary policy” made “in response to an increase in bullying on the app”.

Is the best solution to bullying to silence the voices of potential victims, rather than the bullies?

Algorithmic influence

TikTok’s algorithm is highly curated to individual users. The app decides what videos to show a user based on: their previous interactions including which videos they watch, like and favourite; video information (such as captions and hashtags); and their device and account settings.

This means users will likely see their own perspectives and beliefs reflected back to them. Autistic people may begin to believe this is the only reality that exists, leading to the creation of a “false reality”.
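Only TikTok knows the details of its recommender, but the feedback loop can be sketched in a few lines. The toy ranker below is illustrative only – the tags, weights and videos are made up – yet it shows how scoring candidates by their overlap with what a user already watches and likes keeps surfacing more of the same.

    # Illustrative toy scoring rule - not TikTok's actual algorithm.
    watch_history = {"#ActuallyAutistic": 12, "#Stimming": 5, "#DogVideos": 3}
    liked_tags = {"#ActuallyAutistic", "#Stimming"}

    candidates = [
        {"title": "A day in my autistic life", "tags": ["#ActuallyAutistic", "#AutisticGirl"]},
        {"title": "Funny dog fails", "tags": ["#DogVideos"]},
        {"title": "Learning to surf", "tags": ["#Surfing"]},
    ]

    def score(video):
        # Reward each hashtag by how often similar content was watched, plus a boost for likes.
        total = 0
        for tag in video["tags"]:
            total += watch_history.get(tag, 0)
            total += 10 if tag in liked_tags else 0
        return total

    for video in sorted(candidates, key=score, reverse=True):
        print(f"{score(video):3d}  {video['title']}")

Content that resembles what you already engage with floats to the top, while everything outside your bubble barely registers – the echo-chamber effect described next.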

On TikTok, autistic people see an idyllic world where everyone understands and embraces autism. We forget that outside our “echo chamber” there is a world of people living in their own echo chambers.

If we want to see genuine improvement, we have to make autism acceptance and inclusion a priority across public life. We could start by including more autistic voices in TV shows, movies, books and news, as well as more representation in leadership teams and among policy makers.


Sandra Jones, Pro Vice-Chancellor, Research Impact, Australian Catholic University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Did Twitter ignore basic security measures? A cybersecurity expert explains a whistleblower’s claims

Peiter “Mudge” Zatko was Twitter’s security chief. What he claims he found there is a security nightmare. Photo by Matt McClain/The Washington Post via Getty Images
Richard Forno, University of Maryland, Baltimore County

Twitter’s former security chief, Peiter “Mudge” Zatko, filed a whistleblower complaint with the Securities and Exchange Commission in July 2022, accusing the microblogging platform company of serious security failings. The accusations amplified the ongoing drama of Twitter’s potential sale to Elon Musk.

Zatko spent decades as an ethical hacker, private researcher, government adviser and executive at some of the most prominent internet companies and government offices. He is practically a legend in the cybersecurity industry. Because of his reputation, when he speaks, people and governments normally listen – which underscores the seriousness of his complaint against Twitter.

As a former cybersecurity industry practitioner and current cybersecurity researcher, I believe that Zatko’s most damning accusations center around Twitter’s alleged failure to have a solid cybersecurity plan to protect user data, deploy internal controls to guard against insider threats and ensure the company’s systems were current and properly updated.

Zatko also alleged that Twitter executives were less than forthcoming about cybersecurity incidents on the platform when briefing both regulators and the company’s board of directors. He claimed that Twitter prioritized user growth over reducing spam and other unwanted content that poisoned the platform and detracted from the user experience. His complaint also expressed concerns about the company’s business practices.

CNN interviewed Twitter whistleblower Peiter “Mudge” Zatko.

Alleged security failures

Zatko’s allegations paint a disturbing picture of not only the state of Twitter’s cybersecurity as a social media platform, but also the security consciousness of Twitter as a company. Both points are relevant given Twitter’s position in global communications and the ongoing struggle against online extremism and disinformation.

Perhaps the most significant of Zatko’s allegations is his claim that nearly half of Twitter’s employees have direct access to user data and Twitter’s source code. Time-tested cybersecurity practices don’t allow so many people with this level of “root” or “privileged” permission to access sensitive systems and data. If true, this means that Twitter could be ripe for exploitation either from within or by outside adversaries assisted by people on the inside who may not have been properly vetted.

Zatko also alleges that Twitter’s data centers may not be as secure, resilient or reliable as the company claims. He estimated that nearly half of Twitter’s 500,000 servers around the world lack basic security controls such as running up-to-date and vendor-supported software or encrypting the user data stored on them. He also noted that the company’s lack of a robust business continuity plan means that should several of its data centers fail due to a cyber incident or other disaster, it could lead to an “existential company ending event.”

These are just some of the claims made in Zatko’s complaint. If his allegations are true, Twitter has failed Cybersecurity 101.

Concerns over foreign government interference

Zatko’s allegations might also present a national security concern. Twitter has been used to spread disinformation and propaganda in recent years during global events like the pandemic and national elections.

For example, Zatko’s report stated that the Indian government forced Twitter to hire government agents, who would have access to vast amounts of Twitter’s sensitive data. In response, India’s at-times hostile neighbor Pakistan accused India of trying to infiltrate the security system of Twitter “in an effort to curb fundamental freedoms.”

Given Twitter’s global footprint as a communications platform, other nations such as Russia and China could require the company to hire their own government agents as a condition of allowing the company to operate in their countries. Zatko’s allegations about Twitter’s internal security raise the possibility that criminals, activists, hostile governments or their supporters could exploit Twitter’s systems and user data by recruiting or blackmailing its employees – which may well present a national security concern.

Worse, Twitter’s own information about its users, their interests and who they follow and interact with on the platform could facilitate targeting for disinformation campaigns, blackmail or other nefarious purposes. Such foreign targeting of prominent companies and their employees has been a major counterintelligence worry in the national security community for decades.

Opposition party members in India protest Twitter’s temporary ban of their leader. The whistleblower’s allegations include Twitter acquiescing to Indian government demands that the company employ government agents. Anadolu Agency via Getty Images

Fallout

Whatever the outcome of Zatko’s complaint in Congress, the SEC or other federal agencies, it already is part of Musk’s latest legal filings as he tries to back out of his purchase of Twitter.

Ideally, in light of these disclosures, Twitter will take corrective action to improve the company’s cybersecurity systems and practices. A good first step the company could take is reviewing and limiting who has root access to its systems, source code and user data to the minimum number necessary. The company should also ensure that its production systems are kept current and that it is effectively prepared to contend with any type of emergency situation without significantly disrupting its global operations.

From a broader perspective, Zatko’s complaint underscores the critical and sometimes uncomfortable role cybersecurity plays in modern organizations. Cybersecurity professionals like Zatko understand that no company or government agency likes publicity for cybersecurity problems. They tend to think long and hard about whether and how to raise cybersecurity concerns like these – and what the potential ramifications might be. In this case, Zatko says his disclosures reflect “the job he was hired to do” as head of security for a social media platform that he says “is critical to democracy.”

For companies like Twitter, bad cybersecurity news often results in a public relations nightmare that could affect share price and their standing in the marketplace, not to mention attract the interest of regulators and lawmakers. For governments, such revelations can lead to a lack of trust in the institutions created to serve society, in addition to potentially creating distracting political noise.

Unfortunately, how cybersecurity problems are discovered, disclosed and handled remains a difficult and sometimes controversial process, with no easy solution for either cybersecurity professionals or today’s organizations.

Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why household robot servants are a lot harder to build than robotic vacuums and automated warehouse workers

Who wouldn’t want a robot to handle all the household drudgery? Skathi/iStock via Getty Images
Ayonga Hereid, The Ohio State University

With recent advances in artificial intelligence and robotics technology, there is growing interest in developing and marketing household robots capable of handling a variety of domestic chores.

Tesla is building a humanoid robot, which, according to CEO Elon Musk, could be used for cooking meals and helping elderly people. Amazon recently acquired iRobot, a prominent robotic vacuum manufacturer, and has been investing heavily in the technology through the Amazon Robotics program to expand robotics technology to the consumer market. In May 2022, Dyson, a company renowned for its power vacuum cleaners, announced that it plans to build the U.K.’s largest robotics center devoted to developing household robots that carry out daily domestic tasks in residential spaces.

Despite the growing interest, would-be customers may have to wait awhile for those robots to come on the market. While devices such as smart thermostats and security systems are widely used in homes today, the commercial use of household robots is still in its infancy.

As a robotics researcher, I know firsthand how household robots are considerably more difficult to build than smart digital devices or industrial robots.

Robots that can handle a variety of domestic chores are an age-old staple of science fiction.

Handling objects

One major difference between digital and robotic devices is that household robots need to manipulate objects through physical contact to carry out their tasks. They have to carry the plates, move the chairs and pick up dirty laundry and place it in the washer. These operations require the robot to be able to handle fragile, soft and sometimes heavy objects with irregular shapes.

State-of-the-art AI and machine learning algorithms perform well in simulated environments. But contact with objects in the real world often trips them up. This happens because physical contact is often difficult to model and even harder to control. While a human can easily perform these tasks, significant technical hurdles remain before household robots can handle objects with human-level ability.

Robots have difficulty in two aspects of manipulating objects: control and sensing. Many pick-and-place robot manipulators like those on assembly lines are equipped with a simple gripper or specialized tools dedicated only to certain tasks like grasping and carrying a particular part. They often struggle to manipulate objects with irregular shapes or elastic materials, especially because they lack the efficient force, or haptic, feedback humans are naturally endowed with. Building a general-purpose robot hand with flexible fingers is still technically challenging and expensive.

It is also worth mentioning that traditional robot manipulators require a stable platform to operate accurately, but the accuracy drops considerably when using them with platforms that move around, particularly on a variety of surfaces. Coordinating locomotion and manipulation in a mobile robot is an open problem in the robotics community that needs to be addressed before broadly capable household robots can make it onto the market.

A sophisticated robotic kitchen is already on the market, but it operates in a highly structured environment, meaning all of the objects it interacts with – cookware, food containers, appliances – are where it expects them to be, and there are no pesky humans to get in the way.

They like structure

In an assembly line or a warehouse, the environment and sequence of tasks are strictly organized. This allows engineers to preprogram the robot’s movements or use simple methods like QR codes to locate objects or target locations. However, household items are often disorganized and placed randomly.

Home robots must deal with many uncertainties in their workspaces. The robot must first locate and identify the target item among many others. Quite often it also requires clearing or avoiding other obstacles in the workspace to be able to reach the item and perform given tasks. This requires the robot to have an excellent perception system, efficient navigation skills, and powerful and accurate manipulation capability.

For example, users of robot vacuums know they must remove all small furniture and other obstacles such as cables from the floor, because even the best robot vacuum cannot clear them by itself. Even more challenging, the robot has to operate in the presence of moving obstacles when people and pets walk within close range.

Keeping it simple

While they appear straightforward for humans, many household tasks are too complex for robots. Industrial robots are excellent for repetitive operations in which the robot motion can be preprogrammed. But household tasks are often unique to the situation and could be full of surprises that require the robot to constantly make decisions and change its route in order to perform the tasks.

Think about cooking or cleaning dishes. In the course of a few minutes of cooking, you might grasp a sauté pan, a spatula, a stove knob, a refrigerator door handle, an egg and a bottle of cooking oil. To wash a pan, you typically hold and move it with one hand while scrubbing with the other, and ensure that all cooked-on food residue is removed and then all soap is rinsed off.

There has been significant development in recent years using machine learning to train robots to make intelligent decisions when picking and placing different objects, meaning grasping and moving objects from one spot to another. However, to be able to train robots to master all different types of kitchen tools and household appliances would be another level of difficulty even for the best learning algorithms.

Not to mention that people’s homes often have stairs, narrow passageways and high shelves. Those hard-to-reach spaces limit the use of today’s mobile robots, which tend to use wheels or four legs. Humanoid robots, which would more closely match the environments humans build and organize for themselves, have yet to be reliably used outside of lab settings.

A solution to task complexity is to build special-purpose robots, such as robot vacuum cleaners or kitchen robots. Many different types of such devices are likely to be developed in the near future. However, I believe that general-purpose home robots are still a long way off.

Ayonga Hereid, Assistant Professor of Mechanical and Aerospace Engineering, The Ohio State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices

With deepfake audio, that familiar voice on the other end of the line might not even be human, let alone the person you think it is. Knk Phl Prasan Kha Phibuly/EyeEm via Getty Images
Logan Blue, University of Florida and Patrick Traynor, University of Florida

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis has been averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even a human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.

Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.

Deepfakes, both audio and video, have been possible only with the development of sophisticated machine learning technologies in recent years. Deepfakes have brought with them a new level of uncertainty around digital media. To detect deepfakes, many researchers have turned to analyzing visual artifacts – minute glitches and inconsistencies – found in video deepfakes.

This is not Morgan Freeman, but if you weren’t told that, how would you know?

Audio deepfakes potentially pose an even greater threat, because people often communicate verbally without video – for example, via phone calls, radio and voice recordings. These voice-only communications greatly expand the possibilities for attackers to use deepfakes.

To detect audio deepfakes, we and our research colleagues at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.

Organic vs. synthetic voices

Humans vocalize by forcing air over the various structures of the vocal tract, including vocal folds, tongue and lips. By rearranging these structures, you alter the acoustical properties of your vocal tract, allowing you to create over 200 distinct sounds, or phonemes. However, human anatomy fundamentally limits the acoustic behavior of these different phonemes, resulting in a relatively small range of correct sounds for each.

How your vocal organs work.

In contrast, audio deepfakes are created by first allowing a computer to listen to audio recordings of a targeted victim speaker. Depending on the exact techniques used, the computer might need to listen to as little as 10 to 20 seconds of audio. This audio is used to extract key information about the unique aspects of the victim’s voice.

The attacker selects a phrase for the deepfake to speak and then, using a modified text-to-speech algorithm, generates an audio sample that sounds like the victim saying the selected phrase. This process of creating a single deepfaked audio sample can be accomplished in a matter of seconds, potentially allowing attackers enough flexibility to use the deepfake voice in a conversation.

Detecting audio deepfakes

The first step in differentiating speech produced by humans from speech generated by deepfakes is understanding how to acoustically model the vocal tract. Luckily scientists have techniques to estimate what someone – or some being such as a dinosaur – would sound like based on anatomical measurements of its vocal tract.

We did the reverse. By inverting many of these same techniques, we were able to extract an approximation of a speaker’s vocal tract during a segment of speech. This allowed us to effectively peer into the anatomy of the speaker who created the audio sample.
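The full pipeline is more involved, but the core idea can be sketched with standard signal-processing tools. The code below is a rough approximation, not our published implementation: it fits a linear-prediction model to one short frame of speech, derives reflection coefficients with the Levinson-Durbin recursion, and turns them into the relative cross-sectional areas of a simple lossless-tube model of the vocal tract.

    import numpy as np

    def vocal_tract_areas(frame, order=12):
        """Rough relative area function of a lossless-tube vocal tract model,
        estimated from one windowed frame of speech via linear prediction."""
        frame = frame * np.hamming(len(frame))
        # Autocorrelation of the frame at lags 0..order.
        r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])

        # Levinson-Durbin recursion, which yields the reflection (PARCOR) coefficients.
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        reflection = []
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err
            a_prev = a.copy()
            for j in range(1, i):
                a[j] = a_prev[j] + k * a_prev[i - j]
            a[i] = k
            reflection.append(k)
            err *= 1.0 - k * k

        # Each reflection coefficient relates the areas of neighbouring tube sections
        # (sign conventions vary; only the relative shape matters here).
        areas = [1.0]
        for k in reflection:
            areas.append(areas[-1] * (1.0 + k) / (1.0 - k))
        return np.array(areas)

    # Hypothetical usage: a 30-millisecond frame sampled at 16 kHz.
    # Here random noise stands in for a real recorded frame of speech.
    frame = np.random.default_rng(0).standard_normal(480)
    print(vocal_tract_areas(frame).round(2))

Applied across a whole utterance, frame-by-frame estimates like these trace how the speaker’s tract changes shape over time – and it is these estimated shapes that turned out to look very different for human and deepfaked speech.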

Deepfaked audio often results in vocal tract reconstructions that resemble drinking straws rather than biological vocal tracts. Logan Blue et al., CC BY-ND

From here, we hypothesized that deepfake audio samples would fail to be constrained by the same anatomical limitations humans have. In other words, we expected analysis of deepfaked audio samples to yield estimated vocal tract shapes that do not exist in people.

Our testing results not only confirmed our hypothesis but revealed something interesting. When extracting vocal tract estimations from deepfake audio, we found that the estimations were often comically incorrect. For instance, it was common for deepfake audio to result in vocal tracts with the same relative diameter and consistency as a drinking straw, in contrast to human vocal tracts, which are much wider and more variable in shape.

This realization demonstrates that deepfake audio, even when convincing to human listeners, is far from indistinguishable from human-generated speech. By estimating the anatomy responsible for creating the observed speech, it’s possible to identify whether the audio was generated by a person or a computer.

Why this matters

Today’s world is defined by the digital exchange of media and information. Everything from news to entertainment to conversations with loved ones typically happens via digital exchanges. Even in their infancy, deepfake video and audio undermine the confidence people have in these exchanges, effectively limiting their usefulness.

If the digital world is to remain a critical resource for information in people’s lives, effective and secure techniques for determining the source of an audio sample are crucial.

Logan Blue, PhD student in Computer & Information Science & Engineering, University of Florida and Patrick Traynor, Professor of Computer and Information Science and Engineering, University of Florida

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The same app can pose a bigger security and privacy threat depending on the country where you download it, study finds

Same app, same app store, different risks if you download it in, say, Tunisia rather than in Germany. NurPhoto via Getty Images
Renuka Kumar, University of Michigan

Google and Apple have removed hundreds of apps from their app stores at the request of governments around the world, creating regional disparities in access to mobile apps at a time when many economies are becoming increasingly dependent on them.

The mobile phone giants have removed over 200 Chinese apps, including widely downloaded apps like TikTok, at the Indian government’s request in recent years. Similarly, the companies removed LinkedIn, an essential app for professional networking, from Russian app stores at the Russian government’s request.

However, access to apps is just one concern. Developers also regionalize apps, meaning they produce different versions for different countries. This raises the question of whether these apps differ in their security and privacy capabilities based on region.

In a perfect world, access to apps and app security and privacy capabilities would be consistent everywhere. Popular mobile apps should be available without increasing the risk that users are spied on or tracked based on what country they’re in, especially given that not every country has strong data protection regulations.

My colleagues and I recently studied the availability and privacy policies of thousands of globally popular apps on Google Play, the app store for Android devices, in 26 countries. We found differences in app availability, security and privacy.

While our study corroborates reports of takedowns due to government requests, we also found many differences introduced by app developers. We found instances of apps with settings and disclosures that expose users to higher or lower security and privacy risks depending on the country in which they’re downloaded.

Geoblocked apps

The countries and one special administrative region in our study are diverse in location, population and gross domestic product. They include the U.S., Germany, Hungary, Ukraine, Russia, South Korea, Turkey, Hong Kong and India. We also included countries like Iran, Zimbabwe and Tunisia, where it was difficult to collect data. We studied 5,684 globally popular apps, each with over 1 million installs, from the top 22 app categories, including Books and Reference, Education, Medical, and News and Magazines.

Our study showed high amounts of geoblocking, with 3,672 of 5,684 globally popular apps blocked in at least one of our 26 countries. Blocking by developers was significantly higher than takedowns requested by governments in all our countries and app categories. We found that Iran and Tunisia have the highest blocking rates, with apps like Microsoft Office, Adobe Reader, Flipboard and Google Books all unavailable for download.

Attempting to download the LinkedIn app in the Google Play app store is a different experience in, from top to bottom, the U.S., Iran and Russia. Kumar et al., CC BY-ND

We found regional overlap in the apps that are geoblocked. In European countries in our study – Germany, Hungary, Ireland and the U.K. – 479 of the same apps were geoblocked. Eight of those, including Blued and USA Today News, were blocked only in the European Union, possibly because of the region’s General Data Protection Regulation. Turkey, Ukraine and Russia also show similar blocking patterns, with high blocking of virtual private network apps in Turkey and Russia, which is consistent with the recent upsurge of surveillance laws.

Of the 61 country-specific takedowns by Google, 36 were unique to South Korea, including 17 gambling and gaming apps taken down in accordance with the national prohibition on online gambling. While the Indian government’s takedown of Chinese apps happened with full public disclosure, surprisingly most of the takedowns we observed occurred without much public awareness or debate.

Differences in security and privacy

The apps we downloaded from Google Play also showed differences based on country in their security and privacy capabilities. One hundred twenty-seven apps varied in what the apps were allowed to access on users’ mobile phones, 49 of which had additional permissions deemed “dangerous” by Google. Apps in Bahrain, Tunisia and Canada requested the most additional dangerous permissions.

Three VPN apps enable clear text communication in some countries, which allows unauthorized access to users’ communications. One hundred and eighteen apps varied by country in the number of ad trackers they included, mostly in the Games, Entertainment and Social categories, with Iran and Ukraine showing the largest increases in ad trackers compared with the baseline number common to all countries.

One hundred and three apps have differences based on country in their privacy policies. Users in countries not covered by data protection regulations, such as GDPR in the EU and the California Consumer Privacy Act in the U.S., are at higher privacy risk. For instance, 71 apps available from Google Play have clauses to comply with GDPR only in the EU and CCPA only in the U.S. Twenty-eight apps that use dangerous permissions make no mention of it, despite Google’s policy requiring them to do so.
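Readers who want to check an app themselves can compare the permissions declared in the manifests of two regional versions. The sketch below is illustrative only: the file paths are hypothetical, and it assumes the AndroidManifest.xml has already been extracted and decoded from each APK (for example with a tool such as apktool).

    import xml.etree.ElementTree as ET

    # Namespace Android uses for manifest attributes such as android:name.
    ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

    def declared_permissions(manifest_path):
        """Return the set of permissions a decoded AndroidManifest.xml requests."""
        root = ET.parse(manifest_path).getroot()
        return {elem.get(ANDROID_NS + "name") for elem in root.iter("uses-permission")}

    # Hypothetical paths for the same app downloaded in two different countries.
    us_perms = declared_permissions("example_app_us/AndroidManifest.xml")
    tn_perms = declared_permissions("example_app_tunisia/AndroidManifest.xml")

    print("Only in the US version:      ", sorted(us_perms - tn_perms))
    print("Only in the Tunisian version:", sorted(tn_perms - us_perms))

Any permission that appears in one region but not the other – especially one Google classes as dangerous, such as access to location, contacts or the microphone – is worth a closer look.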

The role of app stores

App stores allow developers to target their apps to users based on a wide array of factors, including their country and their device’s specific features. Though Google has taken some steps toward transparency in its app store, our research shows that there are shortcomings in Google’s auditing of the app ecosystem, some of which could put users’ security and privacy at risk.

Potentially also as a result of app store policies in some countries, app stores that specialize in specific regions of the world are becoming increasingly popular. However, these app stores may not have adequate vetting policies, thereby allowing altered versions of apps to reach users. For example, a national government could pressure a developer to provide a version of an app that includes backdoor access. There is no straightforward way for users to distinguish an altered app from an unaltered one.

Our research provides several recommendations to app store proprietors to address the issues we found:

  • Better moderate their country targeting features
  • Provide detailed transparency reports on app takedowns
  • Vet apps for differences based on country or region
  • Push for transparency from developers on their need for the differences
  • Host app privacy policies themselves to ensure their availability when the policies are blocked in certain countries


Renuka Kumar, Ph.D. student in Computer Science and Engineering, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Nobel-winning quantum weirdness undergirds an emerging high-tech industry, promising better ways of encrypting communications and imaging your body

Devices like this experimental apparatus can produce pairs of photons that are linked, or ‘entangled’. Carlos Jones/ORNL, U.S. Dept. of Energy
Nicholas Peters, University of Tennessee

Unhackable communications devices, high-precision GPS and high-resolution medical imaging all have something in common. These technologies – some under development and some already on the market – all rely on the non-intuitive quantum phenomenon of entanglement.

Two quantum particles, like pairs of atoms or photons, can become entangled. That means a property of one particle is linked to a property of the other, and a change to one particle instantly affects the other particle, regardless of how far apart they are. This correlation is a key resource in quantum information technologies.
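One way to see what that linkage means is to write down the simplest entangled state of two photons and check its statistics. The short calculation below is a standard textbook exercise, not tied to any particular experiment: it builds the Bell state (|00⟩ + |11⟩)/√2 and confirms that each photon’s measurement result is individually random, yet the two results always agree.

    import numpy as np

    # Bell state (|00> + |11>) / sqrt(2) in the four-dimensional two-photon space.
    bell = np.zeros(4)
    bell[0] = bell[3] = 1 / np.sqrt(2)  # amplitudes for |00> and |11>

    # Probability of each joint outcome 00, 01, 10, 11.
    probs = np.abs(bell) ** 2
    for outcome, p in zip(["00", "01", "10", "11"], probs):
        print(f"P({outcome}) = {p:.2f}")

    # Each photon on its own looks like a fair coin flip...
    print("P(first photon reads 0) =", probs[0] + probs[1])   # 0.5
    # ...yet the two readings agree every single time.
    print("P(outcomes agree)       =", probs[0] + probs[3])   # 1.0

Repeating the calculation with the photons measured in rotated bases produces the stronger-than-classical correlations that Bell-test experiments, including those honoured by this year’s Nobel Prize, set out to probe.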

For the most part, quantum entanglement is still a subject of physics research, but it’s also a component of commercially available technologies, and it plays a starring role in the emerging quantum information processing industry.

Pioneers

The 2022 Nobel Prize in Physics recognized the profound legacy of the experimental work on quantum entanglement by Alain Aspect of France, John F. Clauser of the U.S. and Anton Zeilinger of Austria – work that has personally touched me since the start of my graduate school career as a physicist. Anton Zeilinger was a mentor of my Ph.D. mentor, Paul Kwiat, a lineage that heavily influenced my dissertation on experimentally understanding decoherence in photonic entanglement.

Decoherence occurs when the environment interacts with a quantum object – in this case a photon – to knock it out of the quantum state of superposition. In superposition, a quantum object is isolated from the environment and exists in a strange blend of two opposite states at the same time, like a coin toss landing as both heads and tails. Superposition is necessary for two or more quantum objects to become entangled.

Entanglement goes the distance

Quantum entanglement is a critical element of quantum information processing, and photonic entanglement of the type pioneered by the Nobel laureates is crucial for transmitting quantum information. Quantum entanglement can be used to build large-scale quantum communications networks.

On a path toward long-distance quantum networks, Jian-Wei Pan, one of Zeilinger’s former students, and colleagues demonstrated entanglement distribution to two locations separated by 747 miles (1,203 km) on Earth via satellite transmission. However, direct transmission rates of quantum information are limited due to loss, meaning too many photons get absorbed by matter in transit so not enough reach the destination.

Entanglement is critical for solving this roadblock, through the nascent technology of quantum repeaters. An important milestone for early quantum repeaters, called entanglement swapping, was demonstrated by Zeilinger and colleagues in 1998. Entanglement swapping links one photon from each of two entangled pairs, thereby entangling the two remaining, initially independent photons, which can be far apart from each other.

Quantum protection

Perhaps the best-known quantum communications application is Quantum Key Distribution (QKD), which allows someone to securely distribute encryption keys. If those keys are stored properly, they will be secure, even from future powerful, code-breaking quantum computers.

How quantum encryption keeps secrets safe.

While the first proposal for QKD did not explicitly require entanglement, an entanglement-based version was subsequently proposed. Shortly after this proposal came the first demonstration of the technique, through the air over a short distance on a table-top. The first demonstrations of entanglement-based QKD, by research groups led by Zeilinger, Kwiat and Nicolas Gisin, were published in the same issue of Physical Review Letters in May 2000.

These entanglement-based distributed keys can be used to dramatically improve the security of communications. A first important demonstration along these lines was from the Zeilinger group, which conducted a bank wire transfer in Vienna, Austria, in 2004. In this case, the two halves of the QKD system were located at the headquarters of a large bank and the Vienna City Hall. The optical fibers that carried the photons were installed in the Vienna sewer system and spanned nine-tenths of a mile (1.45 km).

Entanglement for sale

Today, there are a handful of companies that have commercialized quantum key distribution technology, including my group’s collaborator Qubitekk, which focuses on an entanglement-based approach to QKD. With a more recent commercial Qubitekk system, my colleagues and I demonstrated secure smart grid communications in Chattanooga, Tennessee.

Quantum communications, computing and sensing technologies are of great interest to the military and intelligence communities. Quantum entanglement also promises to boost medical imaging through optical sensing and high-resolution radio frequency detection, which could also improve GPS positioning. There’s even a company gearing up to offer entanglement-as-a-service by providing customers with network access to entangled qubits for secure communications.

Many other quantum applications, some already proposed and some yet to be invented, will be enabled by future entangled quantum networks. Quantum computers will perhaps have the most direct impact on society by enabling direct simulation of problems that do not scale well on conventional digital computers. In general, quantum computers produce complex entangled networks when they are operating. These could have huge impacts on society, ranging from reducing energy consumption to developing personally tailored medicine.

Finally, entangled quantum sensor networks promise the capability to measure theorized phenomena, such as dark matter, that cannot be seen with today’s conventional technology. The strangeness of quantum mechanics, elucidated through decades of fundamental experimental and theoretical work, has given rise to a new burgeoning global quantum industry.

Nicholas Peters, Joint Faculty, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.