Monday, February 19, 2018

Food: From Micro to Macro


http://ift.tt/2Gr5E82

By Karen Sternheimer

What we eat is deeply personal. It is also connected to our cultural and socio-economic backgrounds. We may seldom think about it, but what we eat has global ramifications.

Sociology teaches us that very few of the choices we make are purely personal. Food literally shapes our biology, but the choices we have access to are shaped by where we live, the groups we are part of, and the policies our lawmakers have made. And all of this cumulatively impacts our environment, locally and globally.

Before going any further, I need to say that I am not a paragon of ideal eating by any means, and this post is not intended to persuade you to change your diet. It is instead meant to help us think about something as vital and mundane as eating, and about the connections between the micro and macro levels of sociological processes.

In fact, there is widespread disagreement about what we should eat in the first place. A simple online search for “best diet” will yield an overwhelming number of choices, some contradicting one another. Advice on whether you should eat meat, dairy, beans, and wheat changes significantly based on who is giving it, sometimes with fear-inducing reasons why entire food categories should be eliminated from your diet.

Different countries offer varying dietary guidelines, albeit with quite a bit of overlap in promoting vegetables and fruits. A recent study, reported on by NPR, found that regardless of a country’s dietary guidelines, if people actually followed them there would be significant benefits to the environment: “Greenhouse gas emissions would fall, waterways would suffer less pollution from fertilizer, and less land would be required to feed people.”

Some countries’ guidelines include the environmental benefits of eating more or less of a particular category, but in the U.S. and other countries such advice can offend major food-producing industries, so environmental impact is purposely omitted from dietary recommendations.

For instance, the cattle industry might not want us to reduce our red meat intake, as Oprah Winfrey famously found out in the 1990s when she was sued after airing an episode of her show about mad cow disease. Some states have food libel laws, which allow industries to sue if their products are publicly disparaged.

Food is not just personal or cultural; it is also political and economic. As Marion Nestle, professor of nutrition, food studies, and public health, wrote in her book Food Politics: How the Food Industry Influences Nutrition and Health, lobbyists play a major role in governmental recommendations about nutrition. Industries also fund research about health and diet, encouraging the public to focus on the dangers of one food group while ignoring their own, as journalist Gary Taubes writes in The Case Against Sugar. Taubes documents how the sugar industry helped fund research framing fat (especially animal fat) as the primary cause of obesity, heart disease, diabetes, and other health complications, deflecting researchers and the public from looking closely at sugar intake as a potential health problem.

The economics of food also shape the kinds of food choices we have access to. Obviously, personal finances shape what kinds of foods we can afford, and where we can afford to live determines whether we have access to grocery stores at all. Whether in rural or urban areas, low income communities are often considered food deserts, meaning residents lack reliable access to a range of healthy foods (as this recent story about a grocery march details).

And even for people with the resources to purchase healthy food and access to healthy options, what your local grocery store decides to sell is rooted in economic realities. If there is not enough demand for a particular product, a store might not offer it. By contrast, I think about the quinoa in my dinner last night, which a few years ago I had never even heard of, let alone considered eating. Learning about new foods has perhaps never been easier with the Internet, and globalization makes transporting food around the world easier as well (although there are environmental costs to this).

Economic pressures have led some grocery stores to increase the number of organic products, as competitors like Whole Foods have made their brand about selling organically grown and responsibly-sourced foods. This has led to more investment in organic farming and perhaps a reduction in prices that might encourage more people to buy organic food and more farmers to practice organic farming, which has traditionally been much more expensive.

So, what does this mean for your dinner tonight? Just a chance for you to think sociologically about the food on your plate.





Must-Read

via Everyday Sociology Blog http://ift.tt/XSiGgP

January 29, 2018 at 03:10AM

Sunday, September 17, 2017


A parable about change


http://ift.tt/2ew8kW6

When do we see change happening?

I’d like to misuse a classic science fiction story as a kind of parable.  See what you think.

In 1948 William Tenn published  “Brooklyn Project”, a time travel tale.  In it a dystopian government launches a probe far into the past, to “various periods ranging from fifteen thousand years to four billion years ago.”  The government claims the physical presence of the “photographic and recording device” won’t alter subsequent reality, as the machine will only observe, not interact with anything.  In reality…

“Brooklyn Project” is akin to Ray Bradbury’s greater and more famous “A Sound of Thunder” (1952), but offers a different political emphasis. (In a way Tenn’s story starts where Bradbury’s ends.)

William Tenn, "Brooklyn Project" first page

From its first appearance in Planet Stories, fall 1948.

So: the probe launches itself into the past, then returns to the present, only to fling itself back in time once more, for a total of twenty-five round trips.  With each voyage the story’s present changes.  Its setting, a metallic room, becomes wooden, and has always been so.  Then the Earth has two moons, and has always had the pair.  Next, English disappears as a language, as the government official “had been stating his thoughts by slapping one pseudopod against the other–as he always had…”  And so on.

At the sequence’s end, the present has changed utterly.  The government official concludes his remarks in words that would have made little sense to the present from which the story began:

“–we are indeed ready for refraction. And that, I tell you, is good enough for those who billow and those who snap. But those who billow will be proven wrong as always, for in the snapping is the rolling and in the rolling is only truth. There need be no change merely because of a sodden cilium. The apparatus has rested at last in the fractional conveyance; shall we view it subtly?”

More drastically, human biology has become something completely different, thanks to the time machine’s repeated incursions:

[T]heir bloated purple bodies dissolved into liquid and flowed up and around to the apparatus. When they reached its four squared blocks, now no longer shrilling mechanically, they rose, solidified, and regained their slime-washed forms.

And the state official concludes, “extend[ing] fifteen purple blobs triumphantly. ‘Nothing has changed!'”

What this says about how we view change in the real world is left as an exercise for the reader.



awesome

Edutech

via Bryan Alexander http://ift.tt/25FGf1H

September 3, 2017 at 01:48PM

Thursday, April 27, 2017

The shock of the old: still living in the 20th century


http://ift.tt/2ow1qqZ

Yesterday I wrote about a day in the life of a futurist like me.  At the post’s end I wonder about the most futuristic parts of the day, and the least.

As I worked on that post, off and on during the day, I couldn’t shake the feeling that I was missing something.  This morning I wanted to pick that intuition up.  Namely, it’s the way daily life in 2017 is still a very 20th-century endeavor, at least seen during that same day in the life.

I’m fond of David Edgerton’s phrase “the shock of the old.”  That’s from his 2007 book, where he gleefully points out the persistence of older, legacy tech during times we assume are more advanced.  One good example is the widespread use of horses and donkeys for transport during WWII, a conflict universally described as one driven by machines.

Edgerton came to mind yesterday as I drove an automobile largely unchanged since the 1980s over mid-20th-century roads (and in medieval traffic, i.e., Boston).  Intermittent cell phone service knocked me out of the 21st century repeatedly, both outside (Vermont, New Hampshire) and in certain locations within buildings.  I ate trail mix and chips recognizable from the Cold War era.  Dashboard radio crackled news and music much like it did when I was a child (born 1967).

I checked out a physical book from a century-old library, then deposited a physical check to a bank with human tellers.

The two airports I used, Boston Logan and Reagan National, operated in most ways as though it were 1985.  Cockpits, largely invisible to mere passengers, are more automated, yes, and service is worse.  But we’re still flying jets (mostly) along familiar flight paths, taking off from and landing on well-laid runways.  TVs blared their form of mock-journalism – the content has changed, by declining, and the format has mutated, by becoming more crowded, but the presentation technology remains.  People still stared at the mounted, public screens.

Elevator, National Press Club

A lovely example of an industrial-age invention still in use.

This morning I walked across downtown DC to a meeting, and thought a time traveler from 1980 would largely feel at home.  There are new models of cars, but they’re mostly tweaks on Detroit’s old patterns (very few Teslas visible), and they still halt and fume through the old streets. People still walk, or push strollers.  Helicopters and airplanes occasionally move overhead.  There aren’t any jetpacks, slidewalks, personal helicopters, teleportation booths, suicide booths, or flying cars.  No Segways appeared. Smartphones are the major difference, and they are actually not too visible.

In today’s meeting an audience sits on chairs in rows, listening to speakers at a podium.

And so on.  You get the idea.  It is vital for futurists – i.e., anyone thinking of what’s to come – to always bear in mind the past’s firm grip.  While we rightly identify possible changes and new arrivals, we can’t lose sight of what persists.

(previous old-shock posts: on tv ads, on election news; on the new Star Wars movie’s fiercely retro nature)






Edutech

via Bryan Alexander http://ift.tt/25FGf1H

April 26, 2017 at 03:29AM

Monday, April 17, 2017

The Germs give out the telephone number of a drug dealer on KROQ radio, 1979


http://ift.tt/2p28Odj


 
One of the best DJs in American history was Rodney Bingenheimer, whose show Rodney on the ROQ was an important force in bringing punk acts to a wider audience in southern California in the late 1970s. Rodney once described his programming philosophy as “anti-Eagles, anti-beards.”

On November 30, 1979, the Germs joined Rodney in the studio for an hour or so of utterly sophomoric fun. The Germs’ only studio album, (GI), had come out a few weeks earlier; the guys make fun of the producer of the album, Joan Jett, saying that her contribution was “sleeping on the couch.”

The general immaturity of the Germs is fully matched by the callers. Right after a guy calls in just to say “Punk rockers have a 10-inch cock,” another dude calls in wanting to know who this band is. The answer given is “Led Zeppelin.” A few minutes later they’re reading “satellite numbers” on the air – a way you could make free long-distance calls. It’s bullshit, but this was just the kind of thing that could have landed KROQ in hot water.

Much of the time Rodney is reading plugs for upcoming gigs, which are just mouthwatering. Bands include the Go-Gos, the Busboys, the Plimsouls, Sham 69, Dead Kennedys, Fear, the Bags, X, and Black Flag.

Around the 32nd minute a woman named Michelle calls the show from the Whiskey, where Madness is playing. One of the gang has some urgent information for her: “Snickers has some really good pot for sale, call 312-960-3662. It might be 714 area code.”

Back in the day, there weren’t very many area codes, so unless otherwise specified it would be assumed that Snickers had a 213 area code, which covered all of downtown Los Angeles. 714 covered Orange County.

I called both of the numbers. They were disconnected. Oh well.
 

 
Germs play the Whiskey on December 23, 1979:

 

Previously on Dangerous Minds:
‘The New Wave’: dorky Hollywood ’77 report features the Germs & Rodney Bingenheimer
‘Product of America’: Members of the Germs and Meat Puppets resurrect a Phoenix punk band from 1978

Posted by Martin Schneider





Must-Read

via Dangerous Minds http://ift.tt/lCDIPD

April 11, 2017 at 06:10AM


Friday, February 3, 2017

Ed-Tech in a Time of Trump


http://ift.tt/2k6RoZW

This talk was delivered at the University of Richmond. The full slide deck can be found here.

Thank you very much for inviting me to speak here at the University of Richmond – particularly to Ryan Brazell for recognizing my work and the urgency of the conversations that hopefully my visit here will stimulate.

Hopefully. Funny word that – “hope.” Funny, those four letters used so iconically to describe a Presidential campaign from a young Illinois Senator, a campaign that seems now lifetimes ago. Hope.

My talks – and I guess I’ll warn you in advance if you aren’t familiar with my work – are not known for being full of hope. Or rather I’ve never believed the hype that we should put all our faith in, rest all our hope on technology. But I’ve never been hopeless. I’ve never believed humans are powerless. I’ve never believed we could not act or we could not do better.

There were a couple of days, following our decision about the title and topic of this keynote – “Ed-Tech in a Time of Trump” – when I wondered if we’d even see a Trump presidency. Would some revelation about his business dealings, his relationship with Russia, his disdain for the Constitution prevent his inauguration? Should we have been so lucky, I suppose. Hope.

The thing is, I’d still be giving much the same talk, just with a different title. “A Time of Trump” could be “A Time of Neoliberalism” or “A Time of Libertarianism” or “A Time of Algorithmic Discrimination” or “A Time of Economic Precarity.” All of this – from President Trump to the so-called “new economy” – has been fueled to some extent by digital technologies; and that fuel, despite what I think many who work in and around education technology have long believed – have long hoped – is not necessarily (heck, even remotely) progressive.

I’ve had a sinking feeling in my stomach about the future of education technology long before Americans – 26% of them, at least – selected Donald Trump as our next President. I am, after all, “ed-tech’s Cassandra.” But President Trump has brought to the forefront many of the concerns I’ve tried to share about the politics and the practices of digital technologies. I want to state here at the outset of this talk: we should be thinking about these things no matter who is in the White House, no matter who runs the Department of Education (no matter whether we have a federal department of education or not). We should be thinking about these things no matter who heads our university. We should be asking – always and again and again: just what sort of future is this technological future of education that we are told we must embrace?

Of course, the future of education is always tied to its past, to the history of education. The future of technology is inexorably tied to its own history as well. This means that despite all the rhetoric about “disruption” and “innovation,” what we find in technology is a layering onto older ideas and practices and models and systems. The networks of canals, for example, were built along rivers. Railroads followed the canals. The telegraph followed the railroad. The telephone, the telegraph. The Internet, the telephone and the television. The Internet is largely built upon a technological infrastructure first mapped and built for freight. It’s no surprise the Internet views us as objects, as products, our personal data as a commodity.

When I use the word “technology,” I draw from the work of physicist Ursula Franklin who spoke of technology as a practice: “Technology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters,” she wrote. “Technology is a system. It entails far more than its individual material components. Technology involves organization, procedures, symbols, new words, equations, and, most of all, a mindset.” “Technology also needs to be examined as an agent of power and control,” Franklin insisted, and her work highlighted “how much modern technology drew from the prepared soil of the structures of traditional institutions, such as the church and the military.”

I’m going to largely sidestep a discussion of the church today, although I think there’s plenty we could say about faith and ritual and obeisance and technological evangelism. That’s a topic for another keynote perhaps. And I won’t dwell too much on the military either – how military industrial complexes point us towards technological industrial complexes (and to ed-tech industrial complexes in turn). But computing technologies undeniably carry with them the legacy of their military origins. Command. Control. Communication. Intelligence.

As Donna Haraway argues in her famous “Cyborg Manifesto,” “Feminist cyborg stories have the task of recoding communication and intelligence to subvert command and control.” I want those of us working in and with education technologies to ask if that is the task we’ve actually undertaken. Are our technologies or our stories about technologies feminist? If so, when? If so, how? Do our technologies or our stories work in the interest of justice and equity? Or, rather, have we adopted technologies for teaching and learning that are much more aligned with that military mission of command and control? The mission of the military. The mission of the church. The mission of the university.

I do think that some might hear Haraway’s framing – a call to “recode communication and intelligence” – and insist that that’s exactly what education technologies do and they do so in a progressive reshaping of traditional education institutions and practices. Education technologies facilitate communication, expanding learning networks beyond the classroom. And they boost intelligence – namely, how knowledge is created and shared.

Perhaps they do.

But do our ed-tech practices ever actually recode or subvert command and control? Do (or how do) our digital communication practices differ from those designed by the military? And most importantly, I’d say, does (or how does) our notion of intelligence?

“Intelligence” – this is the one to watch and listen for. (Yes, that’s ironic that “ed-tech in a time of Trump” will be all about intelligence, but hear me out.)

“Intelligence” means understanding, intellectual, mental faculty. Testing intelligence, as Stephen Jay Gould and others have argued, has a long history of ranking and racism. The word “intelligence” is also used, of course, to describe the gathering and assessment of tactical information – information, often confidential information, with political or military value. The history of computing emerges from cryptography, tracking and cracking state secrets. And the word “intelligence” is now used – oh so casually – to describe so-called “thinking machines”: algorithms, robots, AI.

It’s probably obvious – particularly when we think of the latter – that our notions of “intelligence” are deeply intertwined with technologies. “Computers will make us smarter” – you know those assertions. But we’ve long used machines to measure and assess “intelligence” and to monitor and surveil for the sake of “intelligence.” And again, let’s recall Franklin’s definition of technologies includes not just hardware or software, but ideas, practices, models, and systems.

One of the “hot new trends” in education technology is “learning analytics” – this idea that if you collect enough data about students that you can analyze it and in turn algorithmically direct students towards more efficient and productive behaviors, institutions towards more efficient and productive outcomes. Command. Control. Intelligence.

And I confess, it’s that phrase “collect enough data about students” that has me gravely concerned about “ed-tech in a time of Trump.” I’m concerned, in no small part, because students are often unaware of the amount of data that schools and the software companies they contract with know about them. I’m concerned because students are compelled to use software in educational settings. You can’t opt out of the learning management system. You can’t opt out of the student information system. You can’t opt out of required digital textbooks or digital assignments or digital assessments. You can’t opt out of the billing system or the financial aid system. You can’t opt out of having your cafeteria purchases, Internet usage, dorm room access, fitness center habits tracked. Your data as a student is scattered across multiple applications and multiple databases, most of which I’d wager are not owned or managed by the school itself but rather outsourced to a third-party provider.

School software (and I’m including K–12 software here alongside higher ed) knows your name, your birth date, your mailing address, your home address, your race or ethnicity, your gender (I should note here that many education technologies still require “male” or “female” and do not allow for alternate gender expressions). It knows your marital status. It knows your student identification number (it might know your Social Security Number). It has a photo of you, so it knows your face. It knows the town and state in which you were born. Your immigration status. Your first language and whether or not that first language is English. It knows your parents’ language at home. It knows your income status – that is, at the K–12 level, if you qualify for a free or reduced lunch and at the higher ed level, if you qualify for a Pell Grant. It knows if you are the member of a military family. It knows if you have any special education needs. It knows if you were identified as “gifted and talented.” It knows if you graduated high school or passed a high school equivalency exam. It knows your attendance history – how often you miss class as well as which schools you’ve previously attended. It knows your behavioral history. It knows your criminal history. It knows your participation in sports or other extracurricular activities. It knows your grade level. It knows your major. It knows the courses you’ve taken and the grades you’ve earned. It knows your standardized test scores.

Obviously it’s not a new practice to track much of that data, and as such these practices are not dependent entirely on new technologies. There are various legal and policy mandates that have demanded for some time now that schools collect this information. Now we put it in “the cloud” rather than in a manila folder in a locked file cabinet. Now we outsource this to software vendors, many of whom promise that, in this era of “big data,” we should collect even more information about students – all their clicks and their time spent “on task,” perhaps even their biometric data and their location in real time – so as to glean more and better insights. Insights that the vendors will then sell back to the school.

Big data.

Command. Control. Intelligence.

This is the part of the talk, I reckon, when someone who speaks about the dangers and drawbacks of “big data” turns the focus to information security and privacy. No doubt schools are incredibly vulnerable on the former front. Since 2005, US universities have been the victims of almost 550 data breaches involving nearly 13 million known records. We typically think of these hacks as going after Social Security Numbers or credit card information or something that’s of value on the black market.

The risk isn’t only hacking. It’s also the rather thoughtless practices of information collection, information sharing, and information storage. Many software companies claim that the data that’s in their systems is their data. It’s questionable if much of this data – particularly metadata – is covered by FERPA. As such, student data can be sold and shared, particularly when the contracts signed with a school do not prevent a software company from doing so. Moreover, these contracts often do not specify how long student data can be kept.

In this current political climate – ed-tech in a time of Trump – I think universities need to realize that there’s a lot more at stake than just financially motivated cybercrime. Think of Wikileaks’ role in the Presidential election, for example. Now think about what would happen if the contents of your email account were released to the public. President Trump has made it a little bit easier, perhaps, to come up with “worst-case scenarios” when it comes to politically targeted hacks, and we might be able to imagine these in light of all the data that higher ed institutions have about students (and faculty).

Again, the risk isn’t only hacking. It’s amassing data in the first place. It’s profiling. It’s tracking. It’s surveilling. It’s identifying “students at risk” and students who are “risks.”

Several years ago – actually, it’s been five or six or seven now – when I was first working as a freelance tech journalist, I interviewed an author about a book he’d written on big data and privacy. He made one of those casual remarks that you hear quite often from people who work in computing technologies: privacy is dead. He’d given up on the idea that privacy was possible or perhaps even desirable; what he wanted instead was transparency – that is, to know who has your data, what data, what they do with it, who they share it with, how long they keep it, and so on. You can’t really protect your data from being “out there,” he argued, but you should be able to keep an eye on where “out there” it exists.

This particular author reminded me that we’ve counted and tracked and profiled people for decades and decades and decades and decades. In some ways, that’s the project of the Census – first conducted in the United States in 1790. It’s certainly the project of much of the data collection that happens at school. And we’ve undertaken these practices since well before there was “big data” or computers to collect and crunch it. Then he made a comment that, even at the time, I found deeply upsetting. “Just as long as we don’t see a return of Nazism,” he joked, “we’ll be okay. Because it’s pretty easy to know if you’re a Jew. You don’t have to tell Facebook. Facebook knows.”

We can substitute other identities there. It’s easy to know if you’re Muslim. It’s easy to know if you’re queer. It’s easy to know if you’re pregnant. It’s easy to know if you’re Black or Latino or if your parents are Syrian or French. It’s easy to know your political affinities. And you needn’t have given over that data, you needn’t have “checked those boxes” in your student information system in order for the software to develop a fairly sophisticated profile about you.

This is a punch card, a paper-based method of proto-programming, one of the earliest ways in which machines could be automated. It’s a relic, a piece of “old tech,” if you will, but it’s also a political symbol. Think draft cards. Think the slogan “Do not fold, spindle or mutilate.” Think Mario Savio on the steps of Sproul Hall at UC Berkeley in 1964, insisting angrily that students not be viewed as raw materials in the university machine.

The first punch cards were developed around 1725 to control the loom, industrializing the craft of weaving, work done largely by women. The earliest design – a paper tape with holes punched in it – was improved upon until the turn of the 19th century, when Joseph Marie Jacquard first demonstrated a mechanism to automate loom operation.

Jacquard’s invention inspired Charles Babbage, often credited with originating the idea of a programmable computer. A mathematician, Babbage believed that “number cards,” “pierced with certain holes,” could operate the Analytical Engine, his plans for a computational device. “We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves,” Ada Lovelace, Babbage’s translator and the first computer programmer, wrote.

But it was Herman Hollerith who invented the recording of data on this medium so that it could then be read by a machine. Earlier punch cards – like those designed by Jacquard – were used to control the machine. They weren’t used to store data. But Hollerith did just that. The first Hollerith card had 12 rows and 9 columns, and data was recorded by the presence or absence of a hole at a specific location on a card.
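Hollerith’s scheme, data recorded as the presence or absence of a hole at a grid position, can be sketched in a few lines of Python. This is only a toy illustration under my own naming; it is not IBM’s actual card layout or character coding:

```python
# Toy sketch of Hollerith-style storage: a card is a 12x9 grid,
# and a datum is the presence (True) or absence (False) of a hole.
# The function names and boolean-grid representation are hypothetical.

ROWS, COLS = 12, 9  # dimensions of the first Hollerith card

def blank_card():
    """A fresh card with no holes punched."""
    return [[False] * COLS for _ in range(ROWS)]

def punch(card, row, col):
    """Record a datum by punching a hole at (row, col)."""
    card[row][col] = True

def has_hole(card, row, col):
    """A tabulating machine 'reads' the card by sensing holes."""
    return card[row][col]

card = blank_card()
punch(card, 0, 3)              # encode one trait at row 0, column 3
print(has_hole(card, 0, 3))    # True: hole present
print(has_hole(card, 1, 3))    # False: no hole at that position
```

Sorting machines could then group cards by which positions were punched, which is all that the “efficiently sorted” tabulation described below required.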

Hollerith founded The Tabulating Machine Company in 1896, one of four companies consolidated to form Computing-Tabulating-Recording Company, later renamed the International Business Machines Corporation. IBM.

Hollerith’s punch card technology was first used in the US Census in 1890 to record individuals’ traits – their gender, race, nationality, occupation, age, marital status. These cards could then be efficiently sorted to quantify the nation. Census officials were thrilled: it had taken almost a decade to tabulate the results of the 1880 census, and by using the new technology, the agency saved $5 million.

Hollerith’s machines were also used by Nicholas II, the czar of Russia, for the first (and only) census of the Russian Empire in 1897. And they were adopted by Hitler’s regime in Germany. As Edwin Black chronicles in his book IBM and the Holocaust,

When Hitler came to power, a central Nazi goal was to identify and destroy Germany’s 600,000-member Jewish community. To Nazis, Jews were not just those who practiced Judaism, but those of Jewish blood, regardless of their assimilation, intermarriage, religious activity, or even conversion to Christianity. Only after Jews were identified could they be targeted for asset confiscation, ghettoization, deportation, and ultimately extermination. To search generations of communal, church, and governmental records all across Germany – and later throughout Europe – was a cross-indexing task so monumental, it called for a computer. But in 1933, no computer existed.

What did exist at the time was the punch card and the IBM machine, sold to the Nazi government by the company’s German subsidiary, Dehomag.

Hitler’s regime made it clear from the outset that it was not interested in merely identifying those Jews who claimed religious affiliation, who said that they were Jewish. It wanted to be able to find those who had Jewish ancestry, Jewish “blood,” those who were not Aryan.

Hitler called for a census in 1933, and Germans filled out the census forms with pen and paper – one form per household. There was a census again in 1939, and as the Third Reich expanded, so did the Nazi compulsion for data collection. Census forms were coded and punched by hand and then sorted and counted by machine. IBM punch cards and IBM machines. During its relationship with the Nazi regime – one lasting throughout Hitler’s rule, throughout World War II – IBM derived about a third of its profits from selling punch cards.

Column 22 on the punch card was for religion – punched at hole 1 to indicate Protestant, hole 2 for Catholic, hole 3 for Jew. The Jewish cards were processed separately. The cards were sorted and indexed and filtered by profession, national origin, address, and other traits. The information was correlated with other data – community lists, land registers, medical information – in order to create a database, “a profession-by-profession, city-by-city, and indeed a block-by-block revelation of the Jewish presence.”

It was a database of inference, relying heavily on statistics alongside those IBM machines. This wasn’t just about those who’d “ticked the box” that they were Jewish. Nazi “race science” believed it could identify Jews by collecting and analyzing as much data as possible about the population. “The solution is that every interesting feature of a statistical nature … can be summarized … by one basic factor,” the Reich Statistical Office boasted. “This basic factor is the Hollerith punch card.”

Command. Control. Intelligence.

The punch card and the mechanized processing of its data were used to identify Jews, as well as Roma and other “undesirables,” so they could be imprisoned, so their businesses and homes could be confiscated, so their possessions could be inventoried and sold. The punch card and the mechanized processing of its data were used to determine which “undesirables” should be sterilized, to track the shipment of prisoners to the death camps, and to keep tabs on those imprisoned and sentenced to die therein. All of this recorded on IBM punch cards. IBM machines.

The CEO of IBM at this time, by the way: Thomas Watson. Yes, this is the man after whom IBM has named its “artificial intelligence” product. IBM Watson, which has partnered with Pearson and with Sesame Street to “personalize learning” through data collection and data analytics.

Now a quick aside, since I’ve mentioned Nazis.

Back in 1990, in the early days of the commercialized Internet, those heady days of Usenet newsgroup discussion boards, attorney Mike Godwin “set out on a project in memetic engineering.” Godwin felt as though comparisons to Nazis occurred too frequently in online discussions. He believed that accusations that someone or some idea was “Hitler-like” were thrown about too carelessly. “Godwin’s Law,” as it came to be known, says that “As an online discussion grows longer, the probability of a comparison involving Hitler approaches 1.” Godwin’s Law has since been invoked to decree that once someone mentions Hitler or Nazis, that person has lost the debate altogether. Pointing out Nazism online is off-limits.

Perhaps we can start to see now how dangerous, how damaging to critical discourse this even rather casual edict has been.

Let us remember the words of Supreme Court Justice Robert Jackson in his opening statement for the prosecution at the Nuremberg Trials:

What makes this inquest significant is that these prisoners represent sinister influences that will lurk in the world long after their bodies have returned to dust. We will show them to be living symbols of racial hatreds, of terrorism and violence, and of the arrogance and cruelty of power. … Civilization can afford no compromise with the social forces which would gain renewed strength if we deal ambiguously or indecisively with the men in whom those forces now precariously survive.

We need to identify and we need to confront the ideas and the practices that are the lingering legacies of Nazism and fascism. We need to identify and we need to confront them in our technologies. Yes, in our education technologies. Remember: our technologies are ideas; they are practices. Now is the time for an ed-tech antifa, and I cannot believe I have to say that out loud to you.

And so you hear a lot of folks in recent months say “read Hannah Arendt.” And I don’t disagree. Read Arendt. Read The Origins of Totalitarianism. Read her reporting from the Eichmann trial.

But also read James Baldwin. Also realize that this politics and practice of surveillance and genocide isn’t just something we can pin on Nazi Germany. It’s actually deeply embedded in the American experience. It is part of this country as a technology.

Let’s think about that first US census, back in 1790, when federal marshals asked for the name of each head of household as well as the numbers of household members who were free white males over age 16, free white males under 16, free white females, other free persons, and slaves. In 1820, the categories were free white males, free white females, free colored males and females, and slaves. In 1850, the categories were white, Black, Mulatto, Black slaves, Mulatto slaves. In 1860, white, Black, Mulatto, Black slaves, Mulatto slaves, Indian. In 1870, white, Black, Mulatto, Indian, Chinese. In 1890, white, Black, Mulatto, Quadroon, Octoroon, Indian, Chinese, Japanese. In 1930, white, Negro, Indian, Chinese, Japanese, Filipino, Korean, Hindu, Mexican.

You might see in these changing categories a changing demographic; or you might see this as the construction and institutionalization of categories of race – particularly race set apart from a whiteness of unspecified national origin, particularly race that the governing ideology and governing system wants identified and wants managed. The construction of Blackness. “Census enumeration is a means through which a state manages its residents by way of formalized categories that fix individuals within a certain time and a particular space,” as Simone Browne writes in her book Dark Matters: On the Surveillance of Blackness, “making the census a technology that renders a population legible in racializing as well as gendering ways.” It is “a technology of disciplinary power that classifies, examines, and quantifies populations.”

Command. Control. Intelligence.

Does the data collection and data analysis undertaken by schools work in a similar way? How does the data collection and data analysis undertaken by schools work? What bodies and beliefs are constituted therein? Is whiteness and maleness always there as “the norm” against which all others are compared? Are we then constructing and even naturalizing certain bodies and certain minds as “undesirable” bodies and “undesirable” minds in the classroom, in our institutions by our obsession with data, by our obsession with counting, tracking, and profiling?

Who are the “undesirables” of ed-tech software and education institutions? Those students who are identified as “cheats,” perhaps. When we turn the cameras on, for example with proctoring software, those students whose faces and gestures are viewed – visually, biometrically, algorithmically – as “suspicious.” Those students who are identified as “out of place.” Not in the right major. Not in the right class. Not in the right school. Not in the right country. Those students who are identified – through surveillance and through algorithms – as “at risk.” At risk of failure. At risk of dropping out. At risk of not repaying their student loans. At risk of becoming “radicalized.” At risk of radicalizing others. What about those educators at risk of radicalizing others? Let’s be honest with ourselves: ed-tech in a time of Trump will undermine educators as well as students; it will undermine academic freedom. It’s already happening. Consider Trump’s tweets this morning about Berkeley.

What do schools do with the capabilities of ed-tech as surveillance technology now in the time of a Trump? The proctoring software and learning analytics software and “student success” platforms all market themselves to schools claiming that they can truly “see” what students are up to, that they can predict what students will become. (“How will this student affect our averages?”) These technologies claim they can identify a “problem” student, and the implication, I think, is that then someone at the institution “fixes” her or him. Helps the student graduate. Convinces the student to leave.

But these technologies do not see students. And sadly, we do not see students. This is cultural. This is institutional. We do not see who is struggling. And let’s ask why we think, as the New York Times argued today, we need big data to make sure students graduate. Universities have not developed or maintained practices of compassion. Practices are technologies; technologies are practices. We’ve chosen computers instead of care. (When I say “we” here I mean institutions, not individuals within institutions. But I mean some individuals too.) Education has chosen “command, control, intelligence.” Education gathers data about students. It quantifies students. It has adopted a racialized and gendered surveillance system – one committed to disciplining minds and bodies – through our education technologies, through our education practices.

All along the way, or perhaps somewhere along the way, we have confused surveillance for care.

And that’s my takeaway for folks here today: when you work for a company or an institution that collects or trades data, you’re making it easy to surveil people and the stakes are high. They’re always high for the most vulnerable. By collecting so much data, you’re making it easy to discipline people. You’re making it easy to control people. You’re putting people at risk. You’re putting students at risk.

You can delete the data. You can limit its collection. You can restrict who sees it. You can inform students. You can encourage students to resist. Students have always resisted school surveillance.

But I hope that you also think about the culture of school. What sort of institutions will we have in a time of Trump? Ones that value open inquiry and academic freedom? I swear to you this: more data will not protect you. Not in this world of “alternate facts,” to be sure. Our relationships to one another, however, just might. We must rebuild institutions that value humans’ minds and lives and integrity and safety. And that means, in its current incarnation at least, in this current climate, ed-tech has very very little to offer us.






via Hack Education http://ift.tt/zsM1Vc

February 2, 2017 at 01:35PM