This is a copy of my notes after reading up on Human Technology Symbiosis on Monday: initial impressions and thoughts. A breakdown of the group discussion will be in the weekly reflection post.
I broke down the major points for the portion I read based on the headers of each section.
Meaningful Human Control
- I interpreted this as the idea that people should be making the meaningful / tough decisions
- "humans in the loop" seemed to be the general idea
- transparent interfaces make clear what decisions were made and why they happened, keeping people accountable
- if an interface knows who made a decision, responsibility can be assigned to specific individuals
- that being said, how much should tech decide? is there a threshold of seriousness before a human should take over? Who decides where that threshold is?
- plus, technology fails; what happens when there's an error at the moment an ethical choice has to be made?
- as cool as transparency is, it produces a lot of information to sift through
- constantly being pinged for decisions can make it stressful to use tech
- some form of prioritisation is inevitably going to have to exist
Humane Digital Intelligence
- supports and respects individual and social life (what does this mean? My perception of social life is meeting friends for lunch once every three months, should ask the humans at the tables in class)
- I guess the above could mean moving away from modern platforms where "engagement" is king, so they try to keep you on there as long as possible
- respects human rights...
- a tough one, who's to say the people using it won't subvert human rights?
- plenty of countries "reinterpret" what rights belong to everyone
- where I'm from, discrimination based on race is normal, but because it takes the form of far-reaching government policies that disadvantage me as a minority, or everyday people being rude, dismissive or hostile when they see me, rather than something as severe as direct violence, the government can be said to adhere to most human rights, even though quality of life is demonstrably lower for certain citizens
- policies that target minorities, even unintentionally by encoding the assumptions their creators make, just entrench inequality
- in the future more rights may be declared universal too
- the right to access the internet possibly
- the right to specific medical treatments and procedures that we can't even name atm
- what happens if technology has to approve someone's insurance claim for an impairing condition? not every health case is the same
Adaptation and Personalisation
- software that works specifically for you, and no one else, tailored to your taste and habits
- some technology has databases of archetypes of users...
- what happens when the tech thinks you are a type of person you aren't, and catalogs / treats you differently because of that?
- e.g. YouTube clogging your entire feed with videos similar to ones you've watched recently, some of which might be of lesser quality and contain inaccurate information (incidentally the source of several major conspiracy theories)
- given the extreme variety of use cases, you can't make something that works for everyone
- perhaps more options or customisation is the way of the future
- how "general" or specialised is the technology supposed to be?
- Technology has to support problem solving, memory and decision making
- essentially compensating for human limitations in a way that saves users' time and brings value to their lives
- What if tech is designed to support people to make a decision, but that decision is the wrong one?
- some forums and communities are blamed for the incitement of violent events; is it the fault of the technology and / or its creators for being vulnerable enough to be subverted and misused?
- Technology understanding people's reactions and feelings
- could this mean that tech will have to know specific emotions and interpretations of them?
- how would tech adapt to a sad, angry or happy person?
- raises the possibility that tech could be used to manipulate people, because it changes something to match people's reactions
- a double-edged instrument, especially when the mechanics are unclear
- compliance standards and rigorous testing are necessary to make sure the tech doesn't outright endanger people
- a recent example... the scandal of the 737 MAX
- maybe technology needs to have a built in component that allows for anonymous flagging of decisions that endanger lives?
- most technology isn't simply going to be proliferated immediately
- care and consideration need to go into how it's going to be used, and the 'public face' of the technologies
- perceptions and assumptions will be hard to change
- people are justifiably anxious and scared
- individuals making decisions on technology need to be held accountable
- Introducing technology too early can result in a bad first step that sours people on new tech
- Zoomers absorb and accept tech differently than their older peers (related: are we zoomers? are we the generation that accepts tech the most, or is it one generation forward?)
- making sure tech isn't too easily accepted is important too... don't want people to blindly accept whatever tech they get for convenience
- imagine the government pushing out a software update that censors all mention of potatoes from the internet, and no one questions it
- tech dividing people into ever-smaller worlds
- people stop interacting with things they don't like and end up in an echo chamber
- an attitude of rejecting new tech, even though technology enables a lot of modern life
Reviewing the article overall... I certainly did a double take a few times because I misinterpreted it. The wording and complexity of the prose meant that the first couple of reads only exposed me to it; I'd argue I'm still trying to understand it now. There's perhaps something to be said about needing to get used to the ornamentation of the writing and how long it takes to get to the message, but perhaps the issue with understanding it lies in my command of the language rather than the way the paper was written.