Week 6 Notes

Here is a link to Tuesday's lecture slides:

https://zulip-uploads.s3.amazonaws.com/42198/h_Lg99IKbGL8guHRMveZpahT/ICS-3-Week-6-Tuesday-Public.pdf?AWSAccessKeyId=AKIAIEVMBCAT2WD3M5KQ&Signature=FSQiYQCHowvSqw1%2BjoaCHjAx6d8%3D&Expires=1651863915

Here is a link to Thursday's lecture slides:

https://zulip-uploads.s3.amazonaws.com/42198/meOqhBKhOKqsUn6ipgy7gfOn/ICS-3-Week-6-Thursday.pdf?AWSAccessKeyId=AKIAIEVMBCAT2WD3M5KQ&Signature=10FLFtqVtRMVjW4MgF1TFK828wk%3D&Expires=1651863917

Announcements

 * Website Check-in
 * Not necessary to edit website in order to get note-taking credit
 * Graded on conversations in Zulip

Flash

 * Most people are familiar with Flash
 * Flash was a browser plugin that could run interactive, animated, scripted applications
 * “Basically a combination of movie, interactive multimedia kind of tool to make graphics, texts, and link things together [...] something a lot of people tried to do at the time on the internet that HTML couldn’t provide”
 * Could create animated graphics and text easily (offered functions that HTML did not support)
 * Created with the Flash development program
 * Can almost be considered an animation tool with support for scripting languages
 * Required the Flash Player plugin to be installed on the client PC
 * Flash files were uploaded to a web server and embedded in the page HTML
 * Video: homestarrunner.com (youtube.com/watch?v=R9Fl0_p-Uo8)
 * Children's book character featured in comedy shorts and flash games
 * Homestarrunner.com was one of the most popular sites on the internet from 2002-2005
 * Highly interactive: users could send emails to characters and they would respond
 * Adobe Flash Player 32 can still run downloaded Flash files
 * Browsers stopped supporting Flash in January 2021
 * Adobe itself also ended support for Flash Player on December 31, 2020

Final Project

 * Due: End of Day, Tuesday, June 7th
 * Late submissions: 5 point deduction
 * Workload: ~2 midterms (but group work allowed, hence less work individually)
 * Groups allowed (Maximum 4 people per group)
 * Look in the #introductions stream to find classmates sharing your interests
 * Start to form groups during week 7
 * Format: a supporting artifact summarizing your report → slides, a poster, a website, etc.; may use images, video, and audio
 * 1,000-1,500 word report, submitted in a document format
 * Integration of previous work is okay, but no copying entire midterms for the final

Report Expectations

 * Well-written, clearly formatted/organized
 * Clear connection to material and prompt
 * Introduces new information

Artifact Expectations

 * Summarize main points of report
 * Communicate through multimedia: video, audio, images, etc.
 * Can be a website/wiki, using the course website is allowed

Prompts
 * Speculative: Web 4.0: Where are we headed as an interconnected global community?
 * Speculative: What is the metaverse, and is it the future?
 * Historical: What is [insert topic] and how has it influenced today’s Internet? (Limit topic to before 2016)
 * Examples: ARPANET, The WELL, the browser wars
 * Personal: How has [insert topic] influenced your life and the lives of your peers?

Diversity

 * The presence and acknowledgement of differences that may include:
 * Race
 * Gender
 * Religion
 * Sexual orientation
 * Ethnicity/nationality
 * Socioeconomic status
 * (Dis)ability
 * Age
 * and Political perspective

Equity

 * The promotion of justice, impartiality, and fairness within the boundaries of institutions or systems
 * Equity is reached through understanding the root causes of outcome disparity

Inclusion

 * Ensure everyone feels welcome
 * Inclusive outcomes are more likely when diverse people participate in the decisions and development of institutions or systems

“The Algorithm”

 * Most social media platforms deliver information to us through an algorithm
 * Idea: a data set (millions of data points) is labeled/classified, a machine learning model is trained on that data, and the trained model then calculates an output for a given input
 * The accuracy of such algorithms varies
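The "classify a data set, train a model, get outputs" idea above can be sketched with a toy nearest-neighbor classifier; all data and names here are invented for illustration, not drawn from any real platform:

```python
# Hypothetical labeled data: (hours_online, posts_per_day) -> assigned label.
labeled_data = [
    ((1.0, 2.0), "casual"),
    ((1.5, 3.0), "casual"),
    ((6.0, 20.0), "power_user"),
    ((7.0, 25.0), "power_user"),
]

def predict(point):
    """1-nearest-neighbor: output the label of the closest labeled example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_data, key=lambda pair: dist(pair[0], point))
    return nearest[1]

print(predict((0.5, 1.0)))   # nearest to the "casual" examples
print(predict((8.0, 30.0)))  # nearest to the "power_user" examples
```

Note how the output depends entirely on the labeled examples: change or skew the training data and the predictions change with it, which is exactly why input quality matters in the next section.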

Black Box AI

 * Represents the fact that the inner workings of machine learning models/algorithms are rarely known
 * Results are influenced by:
 * Quality of inputs → Are the images of the books clear and easily identifiable?
 * Representational accuracy of inputs → Do the images of books include ALL types of books or only hardbacks?
 * Accuracy of input classification → Did the humans classifying the inputs get it right?
 * Engineer demographics → Do the engineers have familiarity with all types of books or only one?

Case Study: Twitter
Twitter released an image cropping algorithm that focused on white faces over Black faces:

https://www.theguardian.com/technology/2020/sep/21/twitter-apologises-for-racist-image-cropping-algorithm

The algorithm automatically chose Mitch McConnell over Barack Obama in an experiment, regardless of tie color or the order of the pictures. Researchers found more instances of bias:
 * Cropped out white or grey hair
 * Crop based on height (lower faces not selected)
 * More likely to ignore people wearing head coverings
 * Preference towards slim, young, bright faces

Class Discussion
Do you think it is intentional?
 * Answer 1: It was not intentional, but no matter who is targeted there will be people critiquing it
 * Answer 2: A little of both; it was somewhat intentional – not racially motivated, but trying to keep people on the app
 * Answer 3: No; the algorithm will prefer the majority and will exclude those who are not a part of the majority
 * Answer 4: Not intentional, but it clearly targets minorities, and somebody somewhere will always be affected by this
 * Answer 5: Kind of; the algorithm on TikTok prefers prettier and younger people and brings them onto your FYP
 * Baldwin’s answer: There could be a set of criteria that are individually reasonable yet could still have created this inequality
Is there harm?
 * Answer 1: Yes. This (the Twitter algorithm) may not be the biggest deal, but other instances can have a much greater impact.
 * Answer 2: It can be harmful, as it doesn’t create a space for a wide variety of people. Maybe we shouldn’t use these algorithms if they exclude people.

Algorithmic Bias
Systematic errors that create unfair, unequal, and exclusive outcomes. Examples:
 * Far more complex than automated photo cropping
 * Amazon recruiting algorithm favors men
 * A courtroom algorithm produced more lenient sentences for white people than for Black people
 * Mortgage algorithms biased against Latino borrowers
 * Uber facial recognition suspends accounts of transitioning transgender drivers

Socrative Quiz
What can be done to remove algorithmic bias in technological systems like the internet?
 * Make accessible to everyone
 * Impossible without removing the algorithms themselves
 * Transparency on outputs and inputs
 * Including more diversity (in training sets)
 * Remove the bias from the algorithm
 * More research should be done

What can be done?

 * Remove bias from training data
 * Keep training data up to date
 * Publicly share training data, support open data sets
 * Diverse, representative teams

XAI: Explainable AI
Make Black Box systems more understandable to humans by making the output more transparent
 * Why Explain?
 * To justify → respond to claims of bias or discrimination
 * To control → Visibility, identify errors
 * To improve
 * To Discover → leads to understanding of how computation can meet human needs and goals
 * Active Exploration

Show and control

 * Understand how inputs affect outputs
 * Help us identify errors
 * Ex. Twitter: the cropping algorithm offered no insight into its choices; a more explainable design would surface why a crop was chosen and let users tweak it
 * For those familiar with programming it is very similar to nested else if statements
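The note above compares an explainable decision procedure to nested else-if statements. A hypothetical sketch of a transparent cropping rule (the field names, thresholds, and regions are all invented for illustration) might look like:

```python
def choose_crop(faces):
    """Pick a crop region using human-readable rules instead of an opaque model.
    `faces` is a hypothetical list of dicts like {"y": pixel_row, "size": pixels}.
    Returns (region, reason) so the decision can be explained and adjusted."""
    if not faces:
        return ("center", "no faces detected, defaulting to center crop")
    largest = max(faces, key=lambda f: f["size"])
    if largest["y"] < 200:
        return ("top", "largest face is near the top of the image")
    else:
        return ("middle", "largest face is lower in the frame")

region, reason = choose_crop([{"y": 150, "size": 80}, {"y": 400, "size": 40}])
print(region, "-", reason)
```

Because each branch carries its own reason string, an error can be traced to a specific rule and the rule tweaked, which is the "show and control" idea.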

Accessible Explanations

 * Interface: explains why an algorithm reached the conclusion it reached
 * The model has to change for its output to fit the interface
 * Example: a banana is chosen instead of books because the algorithm sees “binding of book, yellow with brown wear marks”

Examples of Explanation of Algorithms

 * Active exploration
 * Enable user control over an algorithm to support user understanding
 * Show and Control
 * Reveal the confidence level of a particular decision
 * Depending on the level of confidence, it can offer different options to the user (gives “Did you mean?” options if confidence is not as high)
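The confidence-dependent behavior described above can be sketched as follows; the threshold values and wording are hypothetical, not from any real system:

```python
def respond(prediction, confidence):
    """Offer different UI responses depending on model confidence.
    Thresholds (0.9, 0.5) are invented for illustration."""
    if confidence >= 0.9:
        return prediction                       # confident: state the answer
    elif confidence >= 0.5:
        return f"Did you mean: {prediction}?"   # unsure: ask the user
    else:
        return "No confident match found."      # low confidence: admit it

print(respond("bush wren", 0.95))
print(respond("bush wren", 0.6))
print(respond("bush wren", 0.2))
```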

Accessible Explanations

 * The explanation has to be accessible to the user
 * Ex: “this is a small brown bird with a long tail” → “this is a bush wren”

Algorithmic Bias: Challenges
In response to the AI information collected by schools: Would you want to see the data?
 * Companies do not want to share; building data sets is hard
 * Requires millions of items in item set, physical work that has to occur
 * ReCAPTCHA is deployed to crowdsource the building of data sets
 * Should we have access to that data?
 * Power imbalance between scientists, policymakers, and end users
 * Policymakers don’t know whole process of scientists (neither do end users)
 * Institutions are highly motivated to implement AI systems
 * Reduces workforce, decisions automated, audiences reached
 * Example: AI used to keep track of where students have been
 * If UCI was doing this, would we want access to that data?
 * Answer 1: Why do they want this data? Are they going to sell this information?
 * Answer 2: This is a privacy breach. The students should be allowed to determine what is done with information collected on them.
 * Answer 3: Yes, if it is collected on me as a student.

Content Moderation

 * Humans can monitor the algorithmic bias.
 * Having a human available can resolve issues as they come up (example: Apple Card credit limit discrepancies between genders)
 * Costs of Human Moderation on the Internet
 * Psychological burden
 * What content is being moderated?
 * Undervalued Work = Low Wages
 * For example, a Facebook content moderator’s average salary was $28,000, compared to $240,000 for the average Facebook employee
 * Outsourced Overseas = poor working conditions
 * High burnout rate
 * Some people would have thousands of decisions to make in one day which is not efficient or healthy for the worker.
 * Moderators have to simultaneously make decisions about content while weighing against company policy. Each company has their own policies which can make it tough.
 * Examples of ways organizations have moved to improve human content moderation
 * “YouTube switched to more human content moderation from AI moderation because it was having errors and was flagging/taking down videos for hate speech and things of the sort inaccurately” - someone on Slack
 * Facebook is hiring another 1,000 people to review and remove ads on their platform
 * Better working conditions for WhatsApp employees (30 minute wellness breaks per 8 hour shift and access to wellness coaches)

Ways to improve Human Moderation

 * Better pay
 * It’s a very difficult job, so better pay can alleviate some of that burden
 * Elevate the relevance and importance of moderators
 * Perception that those who are paid less are less important to the organization
 * They need better pay and more representation
 * Labor Organization
 * Mental Health Support
 * Ways to integrate algorithms to work with humans to reduce burdens
 * Public Policy?

Critical Access
Each year, more of the global community comes online, and for good reason. We need to do something to support that access.

Benefits of Internet access

 * Access to more and lower cost goods
 * Find something cheaper online than in person
 * Multiple ways to communicate with others
 * More options to communicate allow engagement with other people in the world
 * Higher rates of employment
 * Higher rates of income
 * Concrete, tangible outcomes to learning how to use system
 * Convenience, time (banking, shopping, entertainment)
 * Lot easier to do many things (and is more accessible to people who struggle with these tasks)
 * Increased social capital
 * Creating identities online = more well-known, bigger network which leads to employment opportunities
 * Remote access to education and learning
 * Zoom
 * Has helped a lot during the pandemic

What can you do?
Automation and moderation are critical to ensuring that the Internet remains a diverse, equitable, and inclusive system. In a people-driven system, people hold the power (e.g., people fuel Twitter, Amazon, etc., making them successful)
 * Accessible thinking
 * Using accessible tools to best of ability
 * Consider the policies and practices of the Internet-based tools that you use
 * Are policies making situations worse?
 * Choose Internet-based tools that align with the ideals that motivate you
 * Things that you care about; align with individual ideals
 * Support tools, platforms, and organizations that work to enforce DEI
 * Student Responses
 * Bringing together like-minded people to combat these issues