Monday, 15 August 2022

How open-source technology can help nonprofits scale

Investing in customisable, open-source software solutions allows nonprofits to grow their programmes faster and at lower costs.

The proverb ‘necessity is the mother of invention’ rings true for India’s social sector now more than ever before. Unable to support their communities in person during the COVID-19 pandemic, nonprofits found new creative solutions to deliver programmes and connect with the people they serve. Almost overnight, digital technologies such as Zoom and WhatsApp became a mainstay of how the social sector works.

WhatsApp, in particular, was used by several nonprofits to send out relevant information and resources, collect feedback, and have conversations free of charge. The shift to WhatsApp was a relatively easy one since India has nearly 400 million monthly active users—the most in the world—and so organisations didn’t have to spend time convincing people to adopt a new app. They were able to easily leverage WhatsApp to move ahead with their work, whether it was enabling digital learning or providing access to healthcare services.

However, despite its convenience, WhatsApp can prove cumbersome after a certain point. Take the example of The Apprentice Project, a nonprofit running education programmes for schoolchildren. When the pandemic hit and schools shut, they could no longer deliver lesson plans in the classroom and had to completely overhaul how they ran their programme. After trying out multiple platforms such as Google Classroom and Edmodo with limited success, they finally turned to WhatsApp. They found that WhatsApp helped them reach a larger number of students across multiple geographies, and students were also more likely to respond since it was a system with which they were already somewhat familiar. However, they hit a few roadblocks fairly quickly: there is a cap on the number of contacts in a group; only the person who owns the phone number can manage the account; and WhatsApp doesn’t allow the organisation to collect structured data. For organisations like The Apprentice Project, which have hundreds of students within their network, manually managing WhatsApp groups proved to be neither scalable, cost-effective, nor sustainable.

Nonprofits use Glific for different needs—from providing information to building long-term engagement with communities and creating behaviour change.

Recognising these communication challenges that nonprofits faced, Project Tech4Dev designed Glific, an open-source, WhatsApp-based platform that organisations can use for two-way communication with their communities. By automating large parts of the conversation, it allows nonprofits to communicate with thousands of people while reducing the drain on their resources. Additionally, Glific supports multiple Indian languages and can be integrated with other apps such as Google Sheets for data monitoring and analysis.
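To illustrate what automating large parts of a conversation looks like in practice, here is a deliberately generic sketch of a keyword-based auto-reply flow. It is an illustration only; the keywords and replies are invented, and it does not represent Glific’s actual implementation or API.

```python
# Generic, illustrative auto-reply flow for a WhatsApp-style chatbot.
# The keywords and messages below are invented for this example.

REPLIES = {
    "hi": "Namaste! Reply LESSON for today's activity or HELP to reach a volunteer.",
    "lesson": "Today's activity: record a one-minute voice note about your favourite story.",
    "help": "Thanks for reaching out. A volunteer will call you back within 24 hours.",
}

def auto_reply(incoming_message: str) -> str:
    """Map an incoming message to a canned response, or hand off to a human."""
    keyword = incoming_message.strip().lower()
    return REPLIES.get(keyword, "Thanks! Your message has been forwarded to our team.")

print(auto_reply("Hi"))                       # automated flow
print(auto_reply("When is the next class?"))  # falls back to a human hand-off
```

Every message handled by a flow like this is one that a staff member does not have to type by hand, which is where the savings at scale come from.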

Today 32 nonprofits use Glific for different needs—from providing information to building long-term engagement with communities and creating behaviour change. For instance, MukkaMaar has been using it to train young girls about the physical and non-physical aspects of self-defence and development. Reap Benefit, on the other hand, has been using Glific to engage with citizens to solve societal problems. Another organisation, Slam Out Loud, sends art-based activities to children that they can complete at any time of the day, whenever they have digital access. 

Custom versus customised

When organisations do decide to invest resources in technology, they often look at building custom solutions from scratch, such as apps, websites, and platforms, in response to a very specific problem. But a specific problem is rarely a unique one. For instance, an education organisation that trains teachers doesn’t need to build a custom tech solution to communicate with them and share resources and materials, especially when there are many other organisations that might employ a tech solution for the same use case. Moreover, a custom solution might work at a certain scale, but may not be sufficient at a much larger scale, or in a different context.

Given this, investing in highly specific tech solutions to problems faced by many is a strategy with many limitations. A more effective use of time and resources would be to build an open-source solution that solves a problem for many and can then be customised to the unique needs of each nonprofit or sector. Such solutions work out better in the long run for the organisation implementing them as well as for other nonprofits who might face similar problems.

A case for open-source technology

To build a more collaborative and impactful social sector, we need to build an ecosystem of open-source technology creators and users. When software is open source, it is publicly available and can be modified based on one’s needs, often for free or at a nominal cost. Using existing open-source software can allow nonprofits to grow their programmes faster and at lower costs.

Using open-source technology to scale communications can be transformative for both large and small nonprofits, allowing them to work more efficiently and effectively. | Picture courtesy: Rawpixel

Different nonprofits will use technology in different ways. Some might use it to send out information, others might want it to track progress and gather feedback, and so on. While all of these features might not be available in the basic version of a piece of software, the advantage of open-source technology is that users can customise it to their requirements.

This is what happened in the case of Digital Green, a nonprofit working to empower smallholder farmers through technology. As part of their work with chilli farmers in Andhra Pradesh, they used a text-based chatbot on WhatsApp to send information about crop protection to farmers and answer any questions they might have. Soon they realised that typing responses was a barrier for many farmers. And so Glific worked with them to create a voice-based chatbot, which allowed farmers to record and send voice notes in their local language.

The reason we were able to quickly integrate a voice feature into the existing platform was that all the other necessary features were already present, making the process faster and less resource intensive. Not only did this feature benefit Digital Green, but it is also now available to any other nonprofit in the ecosystem that is looking for something similar. As the ecosystem develops and more nonprofits articulate their needs, additional features will continue to get added. Over time, as new customisations are added to the basic tool, the product evolves to encompass a range of features that can be used across organisations working on different issues. With each round of customisation, the cost and time taken to develop the product also keep reducing, making the whole process self-propagating, scalable, and sustainable.

Had an organisation such as Digital Green decided to go to the market and build custom software unique to its needs, it would have taken a lot more time and resources. Additionally, when weighing customising open-source software against building custom solutions, it is helpful to remember that building new software from scratch requires creating a whole ecosystem of regular maintenance, updates, and management around it.

For nonprofits, it is no longer a question of online versus offline.

Software as a service (SaaS) solutions such as Glific—where the nonprofit simply pays for the service and the software organisation takes care of all product design and maintenance—usually come with pre-built systems that are much faster to deploy, both within the organisation and in programme implementation. For example, a nonprofit that decides to use WhatsApp can begin implementing its communication strategy within a few days, rather than having to learn about an entirely new software tool and then get its communities to use it.

For nonprofits, it is no longer a question of online versus offline. The pandemic has pushed them to think about integrating digital solutions into their operations. Harnessing technology allows organisations to reinforce their in-person communication at multiple points, using multiple channels. A WhatsApp message sent to follow up after a door-to-door visit, for example, makes programme delivery and engagement more effective and powerful.

Using open-source technology to scale communications can be transformative for both large and small nonprofits, allowing them to work more efficiently and effectively. For organisations that are wary of, intimidated by, or sceptical about technology, starting small is the way to go. It’s also not necessary to have a team of tech experts, especially if the nonprofit does not have the resources to invest in one. What is important is that nonprofits are able to articulate their key problems, needs, and goals clearly, so that they can work with software vendors and developers to come up with solutions or customise existing tools that work for them. This will strengthen not only their work, but also the sector as a whole.

This is the fifth article in an 8-part series which seeks to build a knowledge base on using technology for social good.

Challenges with tech in midsize to large nonprofits

Nonprofits, even midsize to large ones, rarely have a seamless tech adoption journey. Here are some common issues that crop up and ways to tackle them.

Technology has been a key driver of efficiency in mainstream businesses. In the development domain too, sectors like agriculture, health, education, skilling, and climate change have demonstrated how tech is beneficial. And yet its adoption in the social sector has been much slower than in other industries. One would assume that midsize to large nonprofits, at least, have the resources, people, systems, and processes in place that make tech easier to integrate. But the uptake remains sluggish.

At Dhwani RIS, we have seen that many midsize to large nonprofits continue to face challenges with technology adoption. For us, and for the context of this article, ‘midsize to large’ refers not only to an organisation’s budget size, but also to its scale of operations and scope of impact. These organisations may or may not have large budgets, but they collaborate with state governments to deliver programmes.

Having observed the digitisation approach of more than 100 nonprofits, we have gained a deeper understanding of the underlying reasons influencing the uptake and sustained adoption of tech-enabled solutions. Some of the gaps we see are due to genuine capacity constraints, while others have to do with the leadership’s unwillingness to truly leverage tech, and with weak change management within organisations.

Digitisation challenges of midsize to large nonprofits

1. Data-driven decision-making is lacking

This isn’t a well-developed muscle in the development sector. Organisations mostly work on expanding their monitoring and evaluation systems, for which they capture certain indicators. These are reported in an aggregated fashion, with high-level data entry such as quarterly reporting, done largely to serve CSR reporting needs.


But data isn’t leveraged quite as effectively for operational needs. The trouble arises in day-to-day reporting, where staff need to make decisions based on the patterns that emerge from the data. This varies across sectors. For instance, public health projects (such as those on maternal and child health, communicable and non-communicable diseases, and sexual and reproductive health) fare slightly better in this regard. We see more and more nonprofits use mobile and web applications to profile patients at their doorsteps, get follow-up reminders, and track the achievement of health programme targets. This could be because of the positive push for technology adoption by various government health departments. In sectors such as education or skilling, however, this kind of hands-on, objective, student-specific data is not easily recorded. Let’s say, for example, a skilling programme is divided into four phases: tabulating the number of students mobilised (phase 1), screening and enrolment (phase 2), placement in jobs (phase 3), and following up at a later date (phase 4). In many cases, either this data isn’t available in one place, or, if it is available, the organisations aren’t equipped to study the patterns it throws up while making decisions.
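To make ‘the patterns it throws up’ concrete, here is a toy sketch with made-up numbers, showing the kind of funnel view a skilling programme could compute once all four phases sit in one dataset.

```python
# Toy example: drop-off between the four phases of a skilling programme.
# The figures are invented purely to illustrate the analysis.
import pandas as pd

funnel = pd.DataFrame(
    {
        "phase": ["mobilised", "enrolled", "placed", "followed_up"],
        "students": [1200, 640, 310, 180],
    }
)

# Conversion from each phase to the next shows where students drop off,
# which is the sort of pattern a programme team can act on.
funnel["conversion_from_previous"] = (
    funnel["students"] / funnel["students"].shift(1)
).round(2)

print(funnel)
```

Producing a view like this is trivial once the data sits in one place; getting it there, and acting on it, is where organisations struggle.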

If data-driven decision-making is not a priority in the organisation, the technology to collect, analyse, and visualise such data may not find its way into the organisation’s operations.

Here’s an example of a large foundation we worked with that was deploying a few crores of funding into their education programmes. They were working at scale, but they lacked a dedicated tech team to guide their digital processes. Given this, it took significant time and effort to roll out the tech solution effectively on the ground.

The primary challenge here was twofold. The first was creating the data framework and articulating what patterns they needed on their dashboards to make decisions. The second was operationalising those needs through a tech platform. In our experience, even if organisations can articulate their needs, they invariably struggle with developing tech to serve those needs: What workflows do they need on the mobile app? What scenarios would they want the app to cover? How should they think about user interface design? How will the different systems (the mobile app and the dashboard) talk to each other? This is where the aptitude for technology is lacking, and the gap is often visible across various levels of the organisation.

We need a stronger tech ecosystem in our sector, and all stakeholders must play a role in building it | Picture courtesy: ©Bill & Melinda Gates Foundation / Frederic Courbet

2. Change management processes are missing

When people aren’t comfortable with the technology they’re expected to use, they may resist new solutions. This stems from a lack of training and change management processes in the organisation.

Change management is needed at all levels—leadership, middle management, and even field staff—because that is where much of the tech adoption has to happen. For example, many public health programmes have ASHA or Anganwadi workers using government apps for data entry. If a new programme asks them to switch to a new app, it becomes a point of resistance, as it increases their workload. They can easily say that the app is not working when they are unable to deliver well, and the middle management or leadership teams cannot cross-question them as they usually don’t have direct access to the app. Staff may also have misgivings about the purpose of such tech—often wondering whether it is meant to facilitate their work or to monitor or replace them. They may not know how to use the tech, may not trust the apps, or may not find them useful, either because they haven’t been adequately informed or because the solution was not designed by engaging them. All of these change management issues lead to lower adoption of tech in programmes.

3. Leadership is often disinterested in tech

As with any new idea in organisations with a centralised decision-making approach, tech adoption depends on the inclination of the leadership team. If the leadership is averse to technology, we do not see its adoption in spite of the push from middle management or other employees.

Sometimes the leadership is not clear about what they want on the dashboard and merely tells the programme management team that they need to introduce tech. In such cases, neither party may be motivated to deploy tech or use it. Further, not everyone in the leadership is equipped to make data-driven decisions.

In our experience, this tech orientation is more likely to be found within CSR teams, philanthropies, or large nonprofits where the leadership comes from a corporate or tech background and understands data-driven decision-making. So even if these senior team members aren’t part of day-to-day discussions and decision-making, they have at least mandated that their organisation or programme be digitised to the extent possible. This ensures efficiency, and no time is wasted converting paper records into Excel sheets and so on.

Sometimes there is a genuine lack of tech exposure among an organisation’s leadership.

The problem isn’t always about disinterest in tech. Sometimes there is a genuine lack of tech exposure among an organisation’s leadership. Because of this, the potential benefits of digitisation go unappreciated, and team members take a hands-off approach. The result: half-hearted or failed tech adoption.

4. Donors don’t incentivise tech enough, or effectively

It’s not uncommon to find nonprofit organisations that adapt to the mandate of their donors. Since most grants do not prioritise or incentivise technology adoption as part of programme implementation, the same priorities tend to trickle down into the decision-making of nonprofits as well. Therefore, the push for tech needs to come from funders. Often we see that nonprofit leadership is disinterested in or not aligned with deploying tech when the funding they get is programme-specific. And if the programme does not have the budget for tech, which is usually the case, they find it hard to provision a budget for technology.

Another crucial aspect is ensuring that tech isn’t incorporated just for the sake of it—which tends to happen when funding is tied to a specific tech solution. When budgeting for tech, it’s important to factor in the next level of detail so that the budget reflects a fairly accurate assessment of costs. For example, one of Dhwani’s clients hit a roadblock in a school education counselling programme for which they’d been backed by a large donor. Neither party knew how much budget to provision for the initiative. When it was time to start thinking about tech implementation, after all the grant formalities had been completed, they realised that they had under-budgeted. Going back at such a point and revisiting the budgets for approved funding can be a challenge. Hence, it is better to involve the tech partner early on so there is a clearer sense of the capital requirement right from the start.

How we can change things

In our view, the following initiatives by donors and nonprofits, taken at the relevant stages, will bring about a positive change in tech adoption across social sector organisations:

Donors

  1. Institute more fellowships for social sector tech enthusiasts; these can be a catalyst in generating interest in data and technology.
  2. Apportion grants to include tech spends, and be specific about what is being built, why, and how it will help the nonprofit solve a problem. Be clear about expectations from the get-go so that both partners are on the same page.

Nonprofits

  1. Invest time, money, and effort in technology- and data-related capacity building for leadership teams, project managers, and field teams.
  2. Build change management templates that can be easily followed while transitioning to new tech solutions.
  3. Shift the focus from technology alone to instead talk about how tech, and the data generated through it, can solve specific problems (for example, reducing the effort of managing a huge beneficiary database; improving localised alerts for farmers to manage adverse climate conditions; and increasing student outreach through EdTech solutions).
  4. Identify early adopters of a tech solution across the ranks of an organisation, and take their help in ensuring further uptake.
  5. Build flexible and reusable solutions instead of short-term use-and-throw ones.
  6. Adopt digitisation at the right stage—nascent organisations where processes are still evolving should first stabilise and then go in for tech innovations, which often tend to be long-term. Moreover, they must communicate their plans and goals to the tech providers at the contract stage so that the development and use of tech remains smooth.

Lastly, at an ecosystem level, we need more social sector tech-focused communities and collaboratives, where leaders and key decision makers are given a platform to share their thoughts and experiences for the benefit of a larger audience. We need a stronger tech ecosystem in our sector, and all stakeholders must play a role in building it.

How to build an effective data dashboard

Good monitoring dashboards help with data analysis and programme implementation in the social sector. Here are seven things to keep in mind when building them.

Monitoring dashboards inform crucial decisions for improving program implementation in the social sector. But building a dashboard is not as simple as throwing together some charts of your indicators; there is a structured approach to it. This blog post offers 7 pieces of wisdom to help you build great dashboards.

1. Get clear about why you need a dashboard

More than just a visual display of data, dashboards are tools that drive key decisions. If you are planning to build a dashboard, ask yourself: Who are the users? What decisions do they need to make, and how often? The decision-making process is user-specific, and thus a dashboard should be designed with only a limited set of users in mind.

When you are convinced that there exists a set of indicators that need regular monitoring and the decision-makers have an appetite for the same, convince the stakeholders, including yourself, that a dashboard is what’s really needed. For example, the need to monitor frontline worker activity is a great dashboard use-case for a healthcare non-profit, but a live dashboard to track internal budget is overkill if budgeting happens just once a year.

It’s worth playing devil’s advocate from the beginning to assess how your dashboard might fail to do a good job. For example, consider constraints around device access, preference, internet connectivity, and language. The last thing you want is to build a cool piece of software that no one will use.

Throughout the process, ask yourself, “how much value is this adding over a spreadsheet?”

2. Don’t skip the homework

Part 1: Understand the users

Start with in-depth research about your dashboard’s target users. What activities do they engage in? What decisions do they get stuck at? Answers to these would inform you about the user’s value system – what they care about the most and what they don’t care about at all. Additionally, this research would inform you about the granularity of data that you would need to create visualizations.

Also, note the number and types of users that require a monitoring system. It’s hard to optimize a single dashboard design for different decision-makers, especially if they are at different levels in the organizational hierarchy. If you discover huge variation within your target user group, you may need to target more specifically. Alternatively, you could keep a separate landing screen for each user type or, in some cases, build multiple dashboards.

Know and be okay with the fact that you can’t give everyone everything at once.

Charting a user-flow will help you decide what features are important.

3. Prioritize what needs to be visualized

A dashboard’s real estate is limited, as is the user’s attention span. Putting every possible indicator on it will render the dashboard too cluttered and perhaps too generic to be useful. You need to think carefully about indicator priority before jumping into the development process.


While your conversations will reveal the indicators of primary concern to users, these could differ from the indicators that reflect whether the program being monitored is achieving its desired outcomes. In that sense, the indicator selection process is a bit tricky. What will help you is a prioritization framework firmly rooted in the organization’s theories of change while still addressing the monitoring needs of the decision-makers.

This process is crucial as it helps constrain the amount of information relevant to the dashboard. Skipping this will lead to a lot of spontaneous iterations later in the development stages as you and your users continually discover new and important indicators.

Keep in mind, not all indicators are that important.

Design and engineering go hand-in-hand. | Picture courtesy: Pixabay

4. Don’t skip the homework

Part 2: Understand dashboard tools

The quickest way to make great dashboards is to use drag-and-drop tools like Tableau and Google Data Studio, or code-driven tools like R Shiny.

All these offer an inevitable tradeoff between cost, learning curve, and range of possible features. To settle upon one, you have to consider many other factors in the project–the developer team’s capability, shipment urgency, feature requests, and post-delivery maintainability. We at IDinsight put special emphasis on cost and maintainability because many of our clients aren’t equipped to pay hefty licensing fees or manage a team of developers.

If you decide to build a dashboard in-house using any of these tools, first try to get a good sense of the features that the tool can and cannot offer. Your users’ experience with the dashboard would be intricately linked to the technical features that can be incorporated into it.
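As a purely illustrative sketch of the code-driven end of that spectrum (using Plotly Dash, a Python analogue of the R Shiny option named above, with invented data and column names), a minimal dashboard can be as small as this:

```python
# Minimal code-driven dashboard sketch using Plotly Dash (illustrative only;
# the data and column names are invented). Requires dash >= 2.7.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Dummy indicator data standing in for a programme's monitoring numbers
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Jan", "Feb", "Mar"],
    "district": ["North", "North", "North", "South", "South", "South"],
    "households_visited": [140, 180, 210, 90, 120, 160],
})

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Frontline activity overview"),
    dcc.Graph(figure=px.line(df, x="month", y="households_visited", color="district")),
])

if __name__ == "__main__":
    app.run(debug=True)  # serves the dashboard locally at http://127.0.0.1:8050
```

Even a toy like this makes the trade-off concrete: the code-driven route gives full control over layout and logic, but every chart and filter becomes something your team has to maintain after delivery.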

Design and engineering go hand-in-hand.

5. Go visual!

With knowledge of the dashboarding tool’s potential to fulfill user needs, you should be at a stage to think about the interface design. The interface should deliver an experience that strongly informs the decisions the user needs to make. For reference, you could review existing public dashboards similar to your use case or pick inspiration from websites like Dribbble and other tool-specific galleries. It will also be helpful to brush up on best practices around data visualization.

It is recommended to use wireframing tools for your first draft. 

If there are equivalent indicator buckets within the dashboard’s scope, like for various departments within a government, you could dedicate a separate page for each bucket. It is best to start the flow with a high-level overview followed by deep dives into various dimensions of the program being monitored (across time, geography, or other units of interest). Referring to the indicator prioritization framework, you should narrow down the types of charts you’d require, their X and Y axes, and what filters they would be responsive to. 

It is recommended to use wireframing tools for your first draft. Miro, Draw.io, PowerPoint, or sometimes just pen-and-paper sketches are great for communicating ideas that are difficult to present verbally.

Visual ideas deserve visual representation.

Example of a low-fidelity design mockup for a dashboard screen.

6. Embrace the crash test

Be quick to present the first draft to your users and solicit feedback, either as a wireframe or a scrappy prototype. Talk about why you made certain design choices and seek the user’s input on usability. The intention is to test your assumptions and check for key requirements you might have missed in your first draft.

It is important that you don’t start building data pipelines right away.

Also, keep examples of similar dashboards handy in case you need to showcase other features or design options. Note that the user might raise new feature requests, but since not all features would be high priority or feasible, try to address the user’s concerns behind the requests and not the requests themselves.

It is important that you don’t start building data pipelines right away. Use dummy data until the user approves the first draft, because it’s entirely possible that they will suggest some heavy changes. With this first set of inputs, you may need to reassess the project timeline, reset expectations, and possibly revisit the drawing board.
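Dummy data does not need to be fancy; a generated CSV that mimics the shape of the real indicators is usually enough. The column names and values below are invented for the example.

```python
# Generate a month of fake indicator data for the prototype dashboard,
# so design feedback can happen before any real data pipeline exists.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)
dummy = pd.DataFrame(
    {
        "date": pd.date_range("2022-07-01", periods=30, freq="D"),
        "district": rng.choice(["North", "South", "East"], size=30),
        "households_visited": rng.integers(20, 80, size=30),
    }
)

dummy.to_csv("dummy_indicators.csv", index=False)  # feed this file to the draft dashboard
```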

You can’t fall in love with your dashboard just yet.

7. Iterate enough, iterate a lot…

With a first draft approved, you can safely move away from the wireframe and start using the shortlisted dashboard tool. For radically different ideas that occur to you or your users later on, use wireframe sketches again. But make sure you are also well versed in the advanced features of your dashboard tool, because not all features can be captured in sketches. Concurrently, you should start engineering the data pipelines at this stage.

However, keep in mind that priorities shift all the time in software development. It takes time to assess if the final output is delivering the value originally expected. The good thing, though, is that you need not finalize every detail at once – you can focus on the big boulders first (the non-negotiable asks) and pebbles later on (the bells and whistles). And this can be easily managed over multiple iterations of development and feedback. So make sure to pay your users a visit at regular intervals.

You not only make dashboards for your users but also with them.

Conclusion

Dashboard development, just like regular software development, is not straightforward. But it is manageable through an iterative development process. Your dashboard tools’ feature limitations are the only hard constraints you need to remember. From there, it’s just about addressing user needs and relieving their pain points in the best way possible.

This article was originally published on IDinsight.

Digital doubles: In the future, virtual versions of ourselves could predict our behaviour

The data we generate online and using apps could be used to inform a digital version of ourselves. (Shutterstock)
Jordan Richard Schoenherr, Concordia University

A digital twin is a copy of a person, product or process that is created using data. This might sound like science fiction, but some have claimed that you will likely have a digital double within the next decade. As a copy of a person, a digital twin would — ideally — make the same decisions that you would make if you were presented with the same materials.

This might seem like yet another speculative claim by futurists. But it is much more possible than people might like to believe. While we might tend to assume that we are special and unique, with a sufficient amount of information, artificial intelligence (AI) can make many inferences about our personalities, social behaviour and purchasing decisions.

The era of big data means that vast quantities of information (called “data lakes”) are collected about your overt attitudes and preferences as well as behavioural traces that you leave behind.

Equally jarring is the extent to which organizations collect our data. In 2019, the Walt Disney Company acquired Hulu, a company that journalists and advocates pointed out had a questionable record when it came to data collection. Seemingly benign phone applications — like ones used for ordering coffee — can collect vast quantities of data from users every few minutes.

The Cambridge Analytica scandal illustrates these concerns, with users and regulators concerned about the prospects of someone being able to identify, predict and shift their behaviour.

But how concerned should we be?

High vs. low fidelity

In simulation studies, fidelity refers to how closely a copy, or model, corresponds to its target. Simulator fidelity refers to the degree of realism a simulation has with respect to real-world references. For example, a racing video game shows a car that speeds up and slows down as we press keys on a keyboard or controller. Whereas a driving simulator might have a windscreen, chassis, gear stick, and gas and brake pedals, a video game has a lower degree of fidelity than the driving simulator.

A digital twin requires a high degree of fidelity that would be able to incorporate real-time, real-world information: if it is raining outside now, it would be raining in the simulator.

In industry, digital twins can have radical implications. If we are able to model a system of humans and machine interaction, we have the ability to allocate resources, anticipate shortages and breakdowns, and make projections.

A human digital twin would incorporate a vast quantity of data about a person’s preferences, biases and behaviours, and be able to have information about a user’s immediate physical and social environment to make predictions.

These requirements mean that achieving a true digital twin is a remote possibility for the near future. The number of sensors required to accumulate the data, and the processing capacity necessary to maintain a virtual model of the user, would be vast. For the present, developers settle for a low-fidelity model.

Ethical issues

Producing a digital twin raises social and ethical issues concerning data integrity, a model’s prediction accuracy, the surveillance capacities required to create and update a digital twin, and ownership and access to a digital twin.

British Prime Minister Benjamin Disraeli is frequently quoted as saying, “There are three kinds of lies: lies, damned lies and statistics,” implying that numbers cannot be trusted. The data collected about us relies on gathering and analyzing statistics about our behaviours and habits to make predictions about how we would behave in given situations.

This sentiment reflects a misunderstanding about how statisticians gather and interpret data, but it does raise an important concern.

One of the most important ethical issues with a digital twin relates to the quantitative fallacy, which assumes that numbers have an objective meaning divorced from their context. When we look at numbers, we often forget that they have specific meanings that come from the measurement instruments used to collect them. And a measurement instrument might work in one context but not another.

When collecting and using data, we must acknowledge that the selection includes certain features and not others. Often, this selection is done out of convenience or due to the practical limitations of technology.

Data used to generate digital profiles is often selective and removed from its context. (Shutterstock)

We must be critical of any claims based on data and artificial intelligence because the design decisions are not available to us. We must understand how the data were collected, processed, used and presented.

Power imbalances

The imbalance of power is a growing topic of public discussion concerning data, privacy and surveillance. At smaller scales, this can produce or increase digital divides — the gap between those who do and those who do not have access to digital technologies. At larger scales, this threatens a new colonialism premised on access to and control of information and technology.

Even the creation of low-fidelity digital twins provides opportunities to monitor users, make inferences about their behaviour, attempt to influence them, and represent them to others.

While this can help in health-care or education settings, a failure to give users the ability to access and assess their data can threaten individual autonomy and the collective good of society.

Data subjects do not have access to the same resources as large corporations and governments. They lack the time, training, and perhaps the motivation. There is a need for consistent and independent oversight to ensure that our digital rights are preserved.

Jordan Richard Schoenherr, Assistant Professor, Psychology, Concordia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

So this is how it feels when the robots come for your job: what GitHub’s Copilot ‘AI assistant’ means for coders

Ben Swift, Australian National University

I love writing code to make things: apps, websites, charts, even music. It’s a skill I’ve worked hard at for more than 20 years.

So I must confess last week’s news about the release of a new “AI assistant” coding helper called GitHub Copilot gave me complicated feelings.

Copilot, which spits out code to order based on “plain English” descriptions, is a remarkable tool. But is it about to put coders like me out of a job?

Trained on billions of lines of human code

GitHub (now owned by Microsoft) is a collaboration platform and social network for coders. You can think of it as something like a cross between Dropbox and Instagram, used by everyone from individual hobbyists through to highly paid software engineers at big tech companies.

Over the past decade or so, GitHub’s users have uploaded tens of billions of lines of code for more than 200 million apps. That’s a lot of ifs and fors and print("hello world") statements.

The Copilot AI works like many other machine learning tools: it was “trained” by scanning through and looking for patterns in those tens of billions of lines of code written and uploaded by members of GitHub’s coder community.

Copilot produces code from instructions in plain English (the pale blue text). GitHub

The training can take many months, hundreds of millions of dollars in computing equipment, and enough electricity to run a house for a decade. Once it’s done, though, human coders can then write a description (in plain English) of what they want their code to do, and the Copilot AI helper will write the code for them.

Based on the Codex “language model”, Copilot is the next step in a long line of “intelligent auto-completion” tools. However, these have been far more limited in the past. Copilot is a significant improvement.

A startlingly effective assistant

I was given early “preview” access to Copilot about a year ago, and I’ve been using it on and off. It takes some practice to learn exactly how to frame your requests in English so the Copilot AI gives the most useful code output, but it can be startlingly effective.

However, we’re still a long way from “Hey Siri, make me a million dollar iPhone app”. It’s still necessary to use my software design skills to figure out what the different bits of code should do in my app.

To understand the level Copilot is working at, imagine writing an essay. You can’t just throw the essay question at it and expect it to produce a useful, well-argued piece. But if you figure out the argument and maybe write the topic sentence for each paragraph, it will often do a pretty good job at filling in the rest of each paragraph automatically.
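To give a sense of that division of labour, here is an illustrative example: the human writes the plain-English comment (the “topic sentence”), and a Copilot-style assistant fills in a plausible body. This particular snippet is my own illustration, not actual Copilot output.

```python
# Human-written "topic sentence": return the median of a list of numbers,
# ignoring any None values.
def median_ignoring_none(values):
    # The body below is the kind of completion an assistant typically suggests.
    cleaned = sorted(v for v in values if v is not None)
    if not cleaned:
        raise ValueError("no numeric values supplied")
    mid = len(cleaned) // 2
    if len(cleaned) % 2:
        return cleaned[mid]
    return (cleaned[mid - 1] + cleaned[mid]) / 2

print(median_ignoring_none([3, None, 1, 4, 1, 5]))  # prints 3
```

The human still has to notice if a subtle detail is wrong (say, how an empty or even-length list should be handled), which is exactly the checking work described below.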

Depending on the type of coding I’m doing, this can sometimes be a huge time- and brainpower-saver.

Biases and bugs

There are some open questions with these sorts of AI coding helper tools. I’m a bit worried they’ll introduce, and reinforce, winner-takes-all dynamics: very few companies have the data (in this case, the billions of lines of code) to build tools like this, so creating a competitor to Copilot will be challenging.

And will Copilot itself be able to suggest new and better ways to write code and build software? We have seen AI systems innovate before. On the other hand, Copilot may be limited to doing things the way we’ve always done them, as AI systems trained on past data are prone to do.

My experiences with Copilot have also made me very aware that my expertise is still needed, to check that the “suggested” code is actually what I’m looking for.

Sometimes it’s trivial to see that Copilot has misunderstood my input. Those are the easy cases, and the tool makes it easy to ask for a different suggestion.

The trickier cases are where the code looks right, but it may contain a subtle bug. The bug might be because this AI code generation stuff is hard, or it might be because the billions of lines of human-written code that Copilot was trained on contained bugs of their own.

Another concern is potential issues about licensing and ownership of the code Copilot was trained on. GitHub has said it is trying to address these issues, but we will have to wait and see how it turns out.

More output from the same input

At times, using Copilot has made me feel a little wistful. The skill I often think makes me at least a little bit special (my ability to write code and make things with computers) may be in the process of being “automated away”, like many other jobs have been at different times in human history.

However, I’m not selling my laptop and running off to live a simple life in the bush just yet. The human coder is still a crucial part of the system, but as curator rather than creator.

Of course, you may be thinking “that’s what a coder would say” … and you may be right.

AI tools like Copilot, OpenAI’s text generator GPT-3, and Google’s Imagen text-to-image engine, have seen huge improvements in the past few years.

Many in white-collar “creative industries” which deal in text and images are starting to wrestle with their fears of being (at least partially) automated away. Copilot shows some of us in the tech industry are in the same boat.

Still, I’m (cautiously) excited. Copilot is a force multiplier in the most optimistic tool-building tradition: it provides more leverage, to increase the useful output for the same amount of input.

These new tools and the new leverage they provide are embedded in wider systems of people, technology and environmental actors, and I’m really fascinated to see how these systems reconfigure themselves in response.

In the meantime, it might help save my brain juice for the hard parts of my coding work, which can only be a good thing.

Ben Swift, Educational Experiences team lead (Senior Lecturer), ANU School of Cybernetics, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Saturday, 6 August 2022

It’s 2022. Why do we still not have waterproof phones?

Ritesh Chugh, CQUniversity Australia

While manufacturers have successfully increased the water-repelling nature of smartphones, they are still far from “waterproof”. A water-resistant product can usually resist water penetration to some extent, but a waterproof product is (meant to be) totally impervious to water.

Last week, Samsung Australia was fined A$14 million by the Australian Federal Court over false representations in ads of the water resistance of its Galaxy phones. The tech giant admitted that submerging Galaxy phones in pool or sea water could corrode the charging ports and stop the phones from working, if charged while still wet.

Similarly, in 2020, Apple was fined €10 million (about A$15.3 million) in Italy for misleading claims about the water resistance of iPhones.

It’s very common for phones to become damaged as a result of being dropped in water. In a 2018 survey in the US, 39% of respondents said they’d dropped their phones in water. Other surveys have had similar results.

So why is it that in 2022 – a time when technological marvels surround us – we still don’t have waterproof phones?

Waterproof vs water-resistant

There’s a rating system used to measure devices’ resistance against solids (such as dust) and liquids (namely water). It’s called the Ingress Protection (IP) rating.

An IP rating will have two numbers. In a rating of IP68, the 6 refers to protection against solids on a scale of 0 (no protection) to 6 (high protection), and 8 refers to protection against water on a scale of 0 (no protection) to 9 (high protection).

The International Electrotechnical Commission is the body behind the IP ratings guide. International Electrotechnical Commission

Interestingly, the benchmark for the water-resistance rating varies between manufacturers. For example, Samsung’s IP68-certified phones are water-resistant to a maximum depth of 1.5m in freshwater for up to 30 minutes, and the company cautions against beach or pool use. Some of Apple’s iPhones with an IP68 rating can be used at a maximum depth of 6m for up to 30 minutes.

Yet both Samsung and Apple are unlikely to consider repairing your water-damaged phone under their warranties.

Moreover, IP rating testing is done under controlled laboratory conditions. In real-life scenarios such as boating, swimming or snorkelling, factors including speed, movement, water pressure and alkalinity all vary. So, gauging a phone’s level of water resistance becomes complicated.

How are phones made water-resistant?

Making a phone water-resistant requires several components and techniques. Typically, the first point of protection is to form a physical barrier around all ingress (entry) points where dust or water could enter. These include the buttons and switches, speakers and microphone outlets, the camera, flash, screen, phone enclosure, USB port and SIM card tray.

These points are covered and sealed using glue, adhesive strips and tapes, silicone seals, rubber rings, gaskets, plastic and metal meshes and water-resistant membranes. After this, a layer of ultra-thin polymer nanocoating is applied to the phone’s circuit board to help repel water.

Nevertheless, a phone’s water resistance will still decrease with time as components age and deteriorate. Apple admits water- and dust-resistance are not permanent features of its phones.

Many people drop their phones down the toilet – be careful! Shutterstock

Cameras are not entirely impervious to water, but some can tolerate submersion a lot better than smartphones. Often that’s because they’re relatively simpler devices.

A smartphone has much more functionality, which means internal components are more sensitive, fragile, and must be built into a smaller casing. All of these factors make it doubly difficult to afford phones a similar level of water resistance.

Adding water resistance to phones also increases their price for consumers (by 20% to 30%, according to Xiaomi’s co-founder). This is a major consideration for manufacturers – especially since even a small crack can render any waterproofing void.

Keeping devices dry

Apart from nanocoating on the internal circuit boards, applying water-repellent coating to the exterior of a phone could boost protection. Some companies are working on this technology for manufacturers.

Future phones might also have circuitry that’s fabricated directly onto (waterproof) silicone material using laser writing techniques, and further coated with water-repellant technologies.

For now, however, there’s no such thing as a waterproof phone. If your phone does find itself at the bottom of a pool or toilet and isn’t turning on, make sure you take the best steps to ensure it dries out properly (and isn’t further damaged).

You can also buy a waterproof case or dry pouch if you want to completely waterproof your phone for water activities.

Ritesh Chugh, Associate Professor - Information and Communications Technology, CQUniversity Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Concerns over TikTok feeding user data to Beijing are back – and there’s good evidence to support them

amrothman from Pixabay, CC BY-SA
David Tuffley, Griffith University

When English statesman Sir Francis Bacon famously said “knowledge is power”, he could hardly have foreseen the rise of ubiquitous social media some 500 years later.

Yet social media platforms are some of the world’s most powerful businesses – not least because they can collect massive amounts of user data, and use algorithms to turn the data into actionable knowledge.

Today, TikTok has some of the best algorithms in the business, and a suite of data-collection mechanisms.

This is how it manages to be so addictive, with some 1.2 billion users as of December 2021. This number is expected to rise to 1.8 billion by the end of the year.

It’s against the background of these huge numbers that the US Federal Communications Commission (FCC) wrote a strongly worded letter to the chief executives of Apple and Google last Tuesday, urging them to remove TikTok from their app stores on the grounds that the company – or more precisely its Chinese parent ByteDance – can’t be trusted with US users’ data.

What are the concerns?

In his letter, FCC commissioner Brendan Carr says:

TikTok is owned by Beijing-based ByteDance — an organisation that is beholden to the Communist Party of China and required by Chinese law to comply with the PRC’s [People’s Republic of China] surveillance demands.

TikTok’s privacy policy says it won’t sell personal information to third parties, but reserves the right to use information internally for business development purposes. That internal use may include use by its parent company, ByteDance.

TikTok US has repeatedly denied breaching US data privacy regulations. It says user data are stored on US servers and not shared with ByteDance. But Carr says these measures fall short of guaranteeing the privacy of US users:

TikTok’s statement that ‘100% of US user traffic is being routed to Oracle’ (in the US) says nothing about where that data can be accessed from.

Following robust questioning by US senators, TikTok has admitted its US-stored data are in fact accessible from China, subject to unspecified security protocols at the US end.

Australian users also have their data stored on US servers, with backups in Singapore. But it’s not known whether these data – which could include users’ browsing habits, images, biographical information and location – are subject to the same safeguards as the US data.

Leaked audio

The unusually blunt language from Carr may have been occasioned by leaked audio obtained by Buzzfeed from more than 80 internal TikTok meetings.

According to a Buzzfeed report from mid-June, China-based employees of ByteDance have repeatedly accessed non-public data about US TikTok users. The tapes overwhelmingly contradict TikTok’s earlier data privacy assurances.

For example, in a September 2021 meeting a senior US-based TikTok manager referred to a Beijing-based engineer as a “master admin” who “has access to everything”. That same month a US-based staffer in the Trust and Safety Department was heard saying “everything is seen in China”.

In short, the recordings corroborate the claim that China-based employees have often accessed US data, and more recently than earlier statements asserted.

Might it all be harmless?

On the one hand TikTok is in the business of entertaining users, with a goal to keep them on the platform and expose them to targeted advertising. On the other hand, TikTok can be used to spread misinformation and influence users to their detriment.

It has been shown to host COVID conspiracy theories and other medical misinformation, and was reportedly used in attempts to influence Kenya’s general elections coming up in August.

Seen in this weaponized context, the US government’s strenuous objections to TikTok come into clearer focus.

Moreover, past events have also raised good reason to suspect Chinese actors of mass data harvesting online.

In 2020, Australian media outlets reported on a data leak from Zhenhua Data, a Chinese company with clients including the Chinese government and the People’s Liberation Army.

The leak was said to contain data on more than 35,000 Australians – including dates of birth, addresses, marital status, photographs, political associations, relatives and social media accounts. This information was gathered from a range of sources, including TikTok.

Would banning TikTok be effective?

Removing TikTok from Google’s and Apple’s app stores can only be done on a country-by-country basis. India banned the platform in June 2020.

If the Australian government were to make the TikTok domain inaccessible from Australia, it could still be accessed through a virtual private network (VPN). A VPN service allows users to create a secure private network within a public one, thus disguising their country of origin. It’s the same tool that allows file-sharing on Pirate Bay and access to other countries’ Netflix programs.

But even if TikTok was banned in Australia and had access removed, or if users mass-terminated their accounts, existing data on the company’s US and Singapore-based servers would remain there. And we now know these data are accessible to TikTok’s parent company, ByteDance, in Beijing.

What should TikTok users do?

Like any technology, TikTok itself is neither good nor bad. But the way in which it’s used creates potential for both.

The best defence with any potentially dangerous technology is to approach it with healthy scepticism and share as little as possible. In the case of TikTok (and other social media) this may involve:

  • not disclosing your full name
  • not disclosing your age and birthday
  • not disclosing your physical location (including through pictures or video)
  • turning off the “suggest your account to others” setting.

You can also request an account deletion. But don’t expect TikTok to delete all the data associated with it. That’s TikTok’s data now, and you agreed to hand it over when you signed up.

David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

China’s big tech problem: even in a state-managed economy, digital companies grow too powerful

Joanne Gray, University of Sydney and Yi Wang, University of Sydney

China’s digital economy has advanced rapidly over the past two decades, with services, communications and commerce moving online.

The Chinese government has generally encouraged its citizens to accept digital technologies in all aspects of daily life. Today China has around a billion internet users.

China has made clear it aims to be a global leader in digital infrastructure and technologies. Leadership in digital tech has been deemed critical to China’s future economic growth, domestically and internationally.

Like Western countries, China has seen the rise of a handful of dominant digital platform or “big tech” internet companies. We studied China’s recent efforts to regulate these companies, which may hold lessons for Western nations trying to manage their own big tech problems.

China’s ‘big four’ tech companies

China’s biggest tech firms are Baidu, Alibaba, Tencent and Xiaomi (often collectively called BATX for short). Broadly speaking, Baidu is built around search and related services, Alibaba specialises in e-commerce and online retail, Tencent focuses on messaging, gaming and social media, and Xiaomi makes phones and other devices.

Like their Silicon Valley counterparts Google, Amazon, Facebook and Apple (or GAFA), the BATX companies dominate their competitors. This is largely thanks to the enormous network effects and economies of scale in data-driven, online business.

The BATX businesses (again, like GAFA) are also known for gobbling up potential competitors. In 2020, Tencent reportedly made 168 investments and/or mergers and acquisitions in domestic and international companies. Alibaba made 44, Baidu 43 and Xiaomi 70.

The tech crackdown

In the past 18 months or so, the BATX companies have come under increased scrutiny from the Chinese government.

In November 2020, an IPO planned for Ant Group, an affiliate of Alibaba, was effectively cancelled. Ant Group was forced to restructure after Chinese regulators “interviewed” the company’s founder.

The following month, Alibaba’s Ali Investment and Tencent’s Literature Group were fined RMB 500,000 (about A$110,000) each for issues relating to anti-competitive acquisitions and contractual arrangements.

At the same time, China’s General Administration of Market Supervision opened a case against Alibaba for abuse of its dominant market position in the online retail platform services market.

In March 2021 more fines were issued including to Tencent and Baidu. They were fined RMB 500,000 each for anti-competitive acquisitions and contractual arrangements.

Then in April 2021, Chinese authorities met with 34 platform companies, including Alibaba and Tencent, to provide “administrative guidance sessions” for internet platforms. That month Alibaba was also fined a spectacular RMB 18.228 billion (around A$4 billion) and Tencent another RMB 500,000 for anti-competitive practices.

In July 2021, Chinese authorities prohibited a merger between two companies that would have further consolidated Tencent’s position in the gaming market.

The government’s efforts are ongoing. Earlier this week, regulators imposed new fines on Alibaba, Tencent and others for violating anti-monopoly rules about disclosing certain transactions.

What’s motivating Chinese authorities to intervene?

The evolution of China’s digital giants shows how data-driven markets work on a “winner takes all” basis in both state-managed and capitalist economies.

The BATX companies now wield significant social and economic power in China. This conflicts with China’s ideological commitment to state-managed social order.

In January 2022, President Xi Jinping called for stronger regulation and administration of China’s digital economy. The goal, he said, was to guard against “unhealthy” development and prevent “platform monopoly and disorderly expansion of capital”.

State-orchestrated social order is not possible where there is an excessive accumulation of private power.

China’s digital policy agenda is designed to achieve strong economic growth. However, the Chinese Communist Party also seeks to maintain strong state control over the structure and function of digital markets and their participants to ensure they operate according to Chinese values and Chinese Communist Party objectives.

What can we learn from China’s approach to ‘big tech’?

How can we regulate digital platforms, particularly to improve competition and public oversight? This remains a largely unsolved public policy challenge.

Australia and the EU, like China, have demonstrated significant willingness to take up this challenge.

In Europe, for example, where the US platforms dominate, policymakers are actively seeking to achieve independence from foreign technology companies. They are doing this by improving their own domestic technology capacities and imposing rules for privacy, data collection and management, and content moderation that align with European values and norms.

While the EU and China are aiming at very different goals, both are willing to take a significant role in regulating digital platforms in accordance with their stated economic, political and social values.

This stands in stark contrast to the situation in the US, which has so far shown little appetite for meaningfully restricting the behaviour of tech companies.

In theory, China’s centralised political power gives it space to try different approaches to platform regulation. But it remains to be seen whether Chinese authorities can successfully overcome the tendency for monopolies to form in digital markets.

If China succeeds, there may be valuable lessons for the rest of the world. For now we must wait and watch.

Joanne Gray, Lecturer in Digital Cultures, University of Sydney and Yi Wang, Early Career Researcher and Sessional Academic in Creative Industries, Digital Platforms and Knowledge Exchange, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sendit, Yolo, NGL: anonymous social apps are taking over once more, but they aren’t without risks

Alexia Maddox, RMIT University

Have you ever told a stranger a secret about yourself online? Did you feel a certain kind of freedom doing so, specifically because the context was removed from your everyday life? Personal disclosure and anonymity have long been a potent mix laced through our online interactions.

We’ve recently seen this through the resurgence of anonymous question apps targeting young people, including Sendit and NGL (which stands for “not gonna lie”). The latter has been installed 15 million times globally, according to recent reports.

These apps can be linked to users’ Instagram and Snapchat accounts, allowing them to post questions and receive anonymous answers from followers.

Although they’re trending at the moment, it’s not the first time we’ve seen them. Early examples include ASKfm, launched in 2010, and Spring.me, launched in 2009 (as “Formspring”).

These platforms have a troublesome history. As a sociologist of technology, I’ve studied human-technology encounters in contentious environments. Here’s my take on why anonymous question apps have once again taken the internet by storm, and what their impact might be.

The NGL app is targeted at ‘teens’ on the Google Play Store. Screenshot/Google Play Store

Why are they so popular?

We know teens are drawn to social platforms. These networks connect them with their peers, support their journeys towards forming identity, and provide them space for experimentation, creativity and bonding.

We also know they manage online disclosures of their identity and personal life through a technique sociologists call “audience segregation”, or “code switching”. This means they’re likely to present themselves differently online to their parents than they do to their peers.

Digital cultures have long used online anonymity to separate real-world identities from online personas, both for privacy and in response to online surveillance. And research has shown online anonymity enhances self-disclosure and honesty.

For young people, having online spaces to express themselves away from the adult gaze is important. Anonymous question apps provide this space. They promise to offer the very things young people seek: opportunities for self-expression and authentic encounters.

Risky by design

We now have a generation of kids growing up with the internet. On one hand, young people are hailed as pioneers of the digital age – and on the other, we fear for them as its innocent victims.

A recent TechCrunch article chronicled the rapid uptake of anonymous question apps by young users, and raised concerns about transparency and safety.

NGL exploded in popularity this year, but hasn’t solved the issue of hate speech and bullying. Anonymous chat app YikYak was shut down in 2017 after becoming littered with hateful speech – but has since returned.

Anonymous question apps are just one example of anonymous online spaces. Screenshot/Twitter

These apps are designed to hook users in. They leverage platform design principles such as interactivity and gamification (wherein a form of “play” is introduced into non-gaming contexts) to provide a highly engaging experience.

Also, given their experimental nature, they’re a good example of how social media platforms have historically been developed with a “move fast and break things” attitude. This approach, first articulated by Meta CEO Mark Zuckerberg, has arguably reached its use-by date.

Breaking things in real life is not without consequence. Similarly, breaking away from important safeguards online is not without social consequence. Rapidly developed social apps can have harmful consequences for young people, including cyberbullying, cyber dating abuse, image-based abuse and even online grooming.

In May 2021, Snapchat suspended integrated anonymous messaging apps Yolo and LMK, after being sued by the distraught parents of teens who committed suicide after being bullied through the apps.

Yolo’s developers overestimated the capacity of their automated content moderation to identify harmful messages.

In the wake of these suspensions, Sendit soared through the app store charts as Snapchat users sought a replacement.

Snapchat then banned anonymous messaging from third-party apps in March this year, in a bid to limit bullying and harassment. Yet it appears Sendit can still be linked to Snapchat as a third-party app, so the ban is not being applied consistently.

Are kids being manipulated by chatbots?

It also seems these apps may feature automated chatbots parading as anonymous responders to prompt interactions – or at least that’s what staff at TechCrunch found.

Although chatbots can be harmless (or even helpful), problems arise if users can’t tell whether they’re interacting with a bot or a person. At the very least it’s likely the apps are not effectively screening bots out of conversations.

Users can’t do much either. If responses are anonymous (and don’t even have a profile or post history linked to them), there’s no way to know if they’re communicating with a real person or not.

It’s difficult to confirm whether bots are widespread on anonymous question apps, but we’ve seen them cause huge problems on other platforms – opening avenues for deception and exploitation.

For example, in the case of Ashley Madison, a dating and hook-up platform that was hacked in 2015, bots were used to chat with human users to keep them engaged. These bots used fake profiles created by Ashley Madison employees.

What can we do?

Despite all of the above, some research has found many of the risks teens experience online pose only brief negative effects, if any. This suggests we may be overemphasising the risks young people face online.

At the same time, implementing parental controls to mitigate online risk is often in tension with young people’s digital rights.

So the way forward isn’t simple. And just banning anonymous question apps isn’t the solution.

Rather than avoid anonymous online spaces, we’ll need to trudge through them together – all the while demanding as much accountability and transparency from tech companies as we can.

For parents, there are some useful resources on how to help children and teens navigate tricky online environments in a sensible way.

Alexia Maddox, Research Fellow, Blockchain Innovation Hub, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.