Monday, June 27, 2016
Towards the future of technology for education: my NMC keynote
http://ift.tt/28Xv6Wu
On June 16th I gave the closing keynote to the New Media Consortium’s annual conference. It was a big talk, with tons of images, ranting, and ideas crammed into a very busy hour.
It meant a great deal to me to address an organization which meant so much. I cut loose in this talk, making 95% of it new just for the occasion, taking a lot of risks and challenging the audience. I’d like to share recordings and material here for your use and/or feedback. So sit back and watch, listen, or read.
Here’s the NMC’s video recording:
And here are the prepared remarks. I riffed on them at points, which you can see in the video above. I’ve added several of the images as I referred to them directly, plus a very short bibliography at the very end:
“It is a signal honor to address an organization – a community – that has meant so much to me for more than a decade. NMC is a source of inspiration, learning, challenges, and many friendships. In honor of the futures work long conducted by the NMC, allow me to take you on a futuring journey for the next hour.
Here’s my plan, what we’ll be exploring:
- Some quick introductory notes
- The short-term future
- Some medium-term futures
- Towards the longer term
- What to do
1. Introductory notes
I’m going to focus this talk on the ways technology might develop in the future. This entails a risk, that of technological determinism, which assumes that technological developments drive some non-technological changes – for our purposes, to education and society. Think of how train tracks and rolling stock can enable yet constrain human actions. A related assumption: people will keep developing and playing with tech. More simply put: I’ll take the persistent drive for technological invention seriously.
I won’t be talking much about Black Swans, like a possible Singularity, or airborne Ebola, or a WWI-scale disaster, or everyone’s favorite, the zombie apocalypse. Also, I won’t dwell on most non-technological contexts (economics, policy, demographics), unusually for me.
Is the future we’re making a good one or a bad one? Americans like to see technologies and futures in terms of starkly opposed utopian and dystopian poles. I’d like to make things more nuanced, stretching futures across a utopia – reality – dystopia spectrum.
Two guides will help us forward, starting with history. We have a good sense, now, of how humans tend to create and react to new technologies, and we can extrapolate from that knowledge. Our second guide is science fiction, which informs much of today’s talk. Not only has sf been giving us visions of possible futures for more than a century, offering cognitive tools for imagining what comes next, but technologists and designers are increasingly influenced by what sf has already imagined. In short, if you’re not reading science fiction, you’re not ready for the rest of the 21st century.
2. Short term, to 2021
We are living through a remarkable time, when revolutions are rippling through traditional education. An unprecedented boom in human creativity thanks to the digital revolution is returning storytelling and story-sharing capabilities to people around the world. And powerful changes in economics, demographics, and globalization, not to mention technology, are reshaping education. Some of schooling as we know it might not survive the decade.
Technological development rushes on. VR is now in place, with applications in gaming, storytelling, and visualization. Watch the costs drop and accessibility rise. Content is starting to appear. AR is developing broadly, for basic visualizations across many different hardware platforms. What’s next? AR and VR connect and intertwine, as the digital and nondigital worlds are thoroughly interlaced. Think Mixed Reality. Think computing in space. Watch Microsoft HoloLens and Magic Leap.
Meanwhile, 3D printing is growing rapidly. In education, we’ve seen it move from engineering to libraries. Think: 3D printing across the curriculum. 3D printing is also allied to new learning spaces. A DIY ethos contributes to the growth of makerspaces and the Maker movement.
Those spaces and technologies link up with the often-heralded transition from consumption to co-creation and production, which continues. Think: student as producer, student as maker.
Meanwhile, hardware continues to shrink, as Moore’s Law keeps on going. For example, my alma mater, UM, produced a combination camera, data storage, and Wi-Fi connection the size of a grain of rice – last year. Let’s assume hardware keeps shrinking. This will let us embed hardware throughout our environment. It will let us do more with projected displays and flexible interfaces. Contact lenses as interfaces could well appear. Mark Weiser’s dreams of ubiquitous computing are coming true.
One way of describing this world of small, embedded, invisible, and environmental hardware is the Internet of Things. This is already occurring through an enormous infrastructure build-out, including: expanding into the IPv6 internet protocol; developing new middleware and operating systems; building out data ownership and control systems. This should lead us to rethink privacy, data ownership and control, safety tradeoffs, and the public/private dynamic.
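To make the scale of that build-out concrete, here is a quick back-of-the-envelope sketch (my own illustration, with round numbers, not anything from the protocol documents):

```python
# Rough scale of the IPv6 address space versus IPv4 -- one reason
# the Internet of Things build-out depends on the newer protocol.
ipv4 = 2 ** 32    # 32-bit addresses: about 4.3 billion
ipv6 = 2 ** 128   # 128-bit addresses: about 3.4 x 10^38

print(f"IPv4 addresses: {ipv4:,}")
print(f"IPv6 addresses: {ipv6:.2e}")
print(f"IPv6 addresses per person (7.4 billion people): {ipv6 / 7_400_000_000:.2e}")
```

Every sensor, lightbulb, and drone can have its own address, with room to spare.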
At a technical level, will we rethink what a file is? Imagine an ecosystem mostly composed of streams, not documents in directories; points and flows, not files.
Will there be hyperlinks in the internet of everything? What happens to the web in a world of ubiquitous, often invisible computing? There are many incentives not to develop the web. For example, mobile apps, streaming video, AAA video games, the LMS, and paywalls all offer alternatives to the open web of Sir Tim Berners-Lee’s invention. Perhaps the web of 2021 will become like US community TV, trawled by a few humans and increasing numbers of AIs. Or perhaps, as Kevin Kelly suggests, we’ll see the IoE hyperlinked and Googleable. Perhaps we’ll improve our ability to search and link across time, connecting to a site’s prior states, hyperlinking the emerging history of the web.
While we shrink some hardware devices, we send others into the air. Drones are changing public and private spaces around the world. There are peaceful uses for delivery, photography, research, and art. Some hobbyists have figured out how to add new devices to drones, such as shotguns and chainsaws. Others, like the Pentagon, have created still more uses in war and espionage. Drones were once directly controlled by human operators; now some are semi-autonomous, or fully autonomous, acting on their own. Already ethicists and insurance companies debate the implications of drone crimes, asking who’s responsible for injuries and deaths at the metaphorical hands of a literal machine. And drones are automating jobs: the Japanese firm Komatsu uses them on construction projects to feed data to automated trucks and digging machines.
So many future trends are historical trends that won’t die, or seem to cease only to lurch back into life later on. Some of you may remember p2p architectures dating back to the 1990s. Blockchain is a new realization of that concept. Not only has blockchain led to Bitcoin, an interesting, messy, and potentially transformative financial development, but now, through Ethereum, it supports decentralized autonomous organizations (DAOs): distributed, automated enterprises. One such organization already functions as a fundraising and fund-dispersal firm.
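For those who missed the p2p era, here is a toy sketch of the core idea (a minimal illustration of hash-chaining, not Bitcoin’s or Ethereum’s actual protocol):

```python
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    """Chain a record to its predecessor by hashing data plus the
    previous hash; altering any earlier block changes every later hash."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice sends Bob 5 tokens", genesis["hash"])
b2 = make_block("Bob sends Carol 2 tokens", b1["hash"])

# Each block commits to the entire history before it.
assert b2["prev_hash"] == b1["hash"] and b1["prev_hash"] == genesis["hash"]
```

Spread that append-only ledger across thousands of machines, add a consensus rule, and you have the distributed trust a DAO runs on.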
Meanwhile, for the next five years let’s expect more of the boring old stuff: social media, crowdsourcing, crowdfunding, open source, data analytics, mobile computing, gaming, gamification, virtualization, digitization, digital storytelling, always-on media capture, always-on surveillance, hacking… There’s more, of course. There always is.
That’s all in the short term. The next 5 years. We already know all about this stuff.
3. Medium term
Let’s look ahead 10 years. To 2026.
Facebook is already looking ahead to that point, and planning. Note what they want to nail down by then:
Automation: so to get to 2026, let’s just assume progress, and let’s consider artificial intelligence. Not at the level of a cataclysmic, world-rebooting Singularity – just extrapolations of current trends, along the lines marked out by McAfee and Brynjolfsson. I’ll assume Moore’s law continues, and add in that quantum computing starts to appear at consumer and enterprise levels. We start talking about a Fourth Industrial Revolution. Let’s grant further, steady growth in deep learning and advanced neural networks. Count Google’s victory over the game of Go as a milestone, and Siri’s uncanny abilities as a baseline.
Then we have to rethink how we design the digital world. Maybe all of it. How does more advanced AI force us to reconceive data standards and publication, information architecture, archiving, for starters?
As it advances, AI starts taking up human functions. We humans generate a vast and growing hoard of data; this is fodder for machines. Projects appear every day to take advantage of improving machine analysis, like http://americangut.org , which aims to improve your health by diving deeply into your guts to better understand their microbial life. We’ve already seen criminal analytics automated – with problems already evident. Machine-to-machine functions keep rising, such as high-frequency trading, which has already advanced beyond regulators’ abilities to constrain it. Already we’ve seen flash crashes, economic incidents driven by the conversation among programs.
Looking ahead to 2026, imagine increasing segments of human life automated as machine-to-machine functions. We could see the emergence of a posthuman order in our lives.
Let’s add robots to the mix, since automation means both AI and robots. The combination is extending into more human labor functions. This can supplement labor shortfalls (Japan, China) or replace labor with capital (everywhere). Robots + AI + 3D printing could mean deglobalization, as we relocalize production, especially through customization and creativity.
More: we’re seeing the development of affective, emotional computing, as the Horizon Report notes. For example, we could develop machine emotional analysts. When will they be on par with a human baseline of emotional assessment? When will they go beyond it, and how do we handle that? On another line, what does good machine translation do to professional translators and second-language teaching? If we combine automation with the IoE and MR, should we anticipate the appearance of intelligent, even sentient tools?
Today we’re seeing the automation of more job functions and entire jobs. Sometimes machines replace physical human functions, sometimes mental ones, as with expert systems. Since 1990, for the first time in centuries, automation has outmoded jobs without creating new ones, perhaps leading to rising unemployment. Imagine a 2026 with persistent 10% or 20% unemployment. What does education mean in such a world?
We’re also seeing the development of automated creativity, already operational in writing (finance, sports, weather) and images. This image is a screen cap from a neural net recreating a classic movie – 2001 – on its own terms:
This next image was created by Google’s Deep Dream, which turned my original photo of our preconference session into mild psychedelia:
We’re also seeing automated assistants. For example, tools for analyzing one’s writing can help us edit and revise more effectively – without a teacher. We’ve seen IBM’s Watson help point to new avenues of medical research, and legal AIs help with document analysis. By 2026 will we see an AI acknowledged, or even credited as coauthor, for a scholarly article?
Should we expect creativity itself to change with automation? The history of human interaction with technology suggests we should, as humans love to revise old forms and create new ones with each invented medium. So look for new ways of making art, different forms of storytelling, fresh takes on gaming, and, maybe, new forms of creativity in 2026 that we lack the words to describe in 2016.
Hang on. There are plenty of reasons to resist such an automation-shaped scenario.
Objection: Humans want contact!
Answer: except when we don’t. Introverts overdose daily on human contact. People don’t necessarily prefer human interaction for unpleasant tasks. Geeks, and an increasingly geeky culture, are famously comfortable with computer-mediated experience. Generally speaking, younger folks are happier with the digital than their elders.
Objection: Automation is too expensive!
Answer: capital continues to accumulate in this economy. That’s one part of rising inequality (cf. Thomas Piketty’s r > g formulation). And technology prices drop, historically.
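A quick illustration of that dynamic, with invented numbers (and assuming returns on capital are fully reinvested – a simplification):

```python
# Piketty's r > g in miniature: if returns on capital (r) outpace
# economic growth (g), the wealth-to-income ratio compounds upward.
# Illustrative figures only, not Piketty's data.
r, g, years = 0.05, 0.015, 30

ratio_growth = ((1 + r) / (1 + g)) ** years
print(f"Wealth-to-income ratio multiplies by {ratio_growth:.1f}x over {years} years")
# roughly 2.8x: capital pulls steadily away from wages
```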
Objection: I’m scared of machines doing bad things to me and my children!
Answer: what happens when the machines are safer and better than humans? Think of self-driving cars, while human drivers kill tens of thousands each year. Or robots in hospitals, where human error kills hundreds of thousands every year.
4. Long term
Let’s look ahead even further. Try 2050. And let’s be open to the full range of possibilities.
What’s happening in the long range horizon is truly disruptive. We’re seeing grand challenges loom like science fiction plotlines. The specter of automation threatens to radically reformat the world of work and society, changing the world our students will inhabit while supplanting teaching and learning. And that’s just for starters.
Consider the new silicon order, and the different ways AI could unfold. Nick Bostrom at Oxford has done speculative research into the ways AI could grow and shape the world, ranging from benign to malign to simply strange. Stephen Hawking warns us against proceeding too quickly, against allowing a dangerous force to erupt across our deeply networked world; imagine how much more threatening his warning becomes in an IoE world. There’s the dystopia of a world ruled by inhuman AI, as in the classic movie The Forbin Project. Then there’s the utopian vision of Iain Banks: imagine benign, grand, and administrative AI that simply works to improve human life. That’s a continuum of silicon-ordered 2050s.
Consider the new social order. Given sufficient automation, how do humans organize themselves in post-2016 forms? We might not see new jobs appear. Income inequality could accelerate to 19th-century levels. In that case, we could see two new worlds of work.
On the one hand, the mass of humans work part time at low wages, living at a subsistence level, otherwise engaged and entertained by a rich and endless digital environment. Above them are the 1%, often deeply skilled, the owners and managers of the new digital order. There isn’t much middle class between them. Call it the new Gilded Age, or neofeudalism.
On the other hand, automation unleashes a new era in human prosperity, of digital delights and technology-enabled offline goods. New political regulations and social orders transfer enough wealth to the majority of people to enable them to lead rich and rewarding lives, combining productive work with reflective leisure – what one British organization half-jokingly calls Fully Automated Luxury Communism. Again, that’s a continuum of 2050s.
Perhaps we combine and synthesize these movements. Technology doesn’t replace humans, but extends and enriches us. We work and play in ever-closer relationship to the digital world. We are both metaphorically and literally cyborgs.
Let’s go further. These technological advances let us hack life. At the same time as we develop silicon technology, we apply digital tools and concepts to the biological realm. New tools, like CRISPR, give us the ability to shape offspring – to edit life – with increasing precision and power. Open source biology gives new insights into life forms – and shares that knowledge widely. Consider a recent paper in World Neurosurgery, “Brainjacking: implant security issues in invasive neuromodulation”. Or consider another paper, on creating macromolecules to reduce the spread of infection within a body.
The new humanity: consider more deeply what happens when we apply these technologies to humanity. What happens to our sense of what it means to be human?
What we think of as “human” may change beyond recognition. We’re already there in 2016 with bionics and widespread, legal, even mandated psychopharmaceuticals. We’re experimenting with brain-controlled machines and nutraceuticals. We’re starting to print tissue and organ replacements. Precision medicine via bioinformatics, new imaging technologies, and nanotech medicine are coming online. New devices give some measure of sight to some blind people. A Stony Brook team used targeted light to alter acetylcholine in the brains of mammals, removing some emotional memories. We can conceive of editing human DNA via CRISPR and gene drives. Some populations live decades longer than they did just two generations ago; if life extension becomes even modestly successful, by 2050 will we see 100 become the new 60? Meanwhile, biological indicators are increasingly used in security: retina scanning, gait recognition.
With such innovations, after such knowledge, what happens to our sense of what it means to be human?
How does public health change? Does health care become the leading American industry? What’s the public interest in editing people’s minds and bodies?
Beyond human life, we could experience a new nature. As one marine biologist, Ruth Gates, explains her new role: “Really, what I am is a futurist… Our project is acknowledging that a future is coming where nature is no longer fully natural.” None of our technological innovations occurs in a vacuum. As we alter life and grow the digital world, we also alter the earth. As we change humanity, we alter nature. We may, by 2050, speak of a new Earth.
Already some use the term Anthropocene to describe the planet after the year 1900. The Northwest Passage is now open. Multiple nations are engaged in a geopolitical rush for the north polar region, which is opening up into a new world.
That’s just the start. What happens when snows and permafrost retreat northwards, opening up lands for farming? When hot climates turn arid and desertification begins? Do more cities become like Las Vegas, artificial creations maintained solely by massive infrastructural investment? When do people flee such cities? What changes will occur in the planetary ecosystem when we produce hybrid and novel forms of life?
In a parallel to the transformation wrought by infusing human bodies and societies with increasing numbers of machines, what happens to the natural world when that world is suffused with small, networked, data-gathering devices? What happens to the thin layer of life wrapping the Earth’s rocky crust when we achieve nanotechnology at industrial scale, or at consumer scale? Will digital connectivity laminate or subsume the biosphere?
In one of his novels Iain Banks describes the infusion of computation into the world through tiny, networked devices. Others have used the term computronium to name the new material that results. Banks coins the sharper word Smatter (smart matter). By 2050, will we produce smatter in labs? Or in garages? Or in forests?
What would we call this world, revised by humans and posthuman technologies? Donna Haraway offers the perhaps tongue-in-cheek term “Chthulucene”.
By 2050, in short, we are hacking the world. Humans change humans, humans change the world, the new world changes humans, and so it goes. By 2050 we’ve hacked the world, and keep on doing so.
Should we envision this as a renaissance? Perhaps this new world is one where human creativity and identity is reborn through an expansion of our powers and capacities, fraught with all kinds of dangers and disasters. Perhaps 2050 is a time of human rebirth.
Maybe a new politics appears by 2050. Think of this combination: drones, perhaps perpetually aloft thanks to solar power, plus big data, IoE-based surveillance, and data analytics backed by AI, could yield a dictator’s ecstatic dream of total social control. Does this system elicit a new politics in thirty-five years? Perhaps some will idolize heroes of our time, like Edward Snowden and Alexandra Elbakyan. Others will abhor them as dangerous criminals. What kind of politics is described by their fans and opponents?
A new politics: for example, in 2016 a proposal appeared for casting some urban areas as Rebel Cities, spaces where surveillance is disallowed. Would such spaces be fruitful ground for shooters like the one in Orlando, as well as for creative expression? Would Rebel Cities descend into chilling cycles of escalating violence and terror, or create new forms of social amity? By 2050 has this range of thinking about surveillance become the new left-right, blue-red political bedrock?
Or, instead, after we hack the earth and transform our population, is our politics described as what Bruce Sterling calls “cities full of old people, afraid of the sky”?
Hang on. What could stop some or all of these developments from happening?
Objection: Moore’s law could slow down or stop, which might ratchet down the pace of technological innovation and production a bit.
Answer: the pace might slow, but the end state could still occur. Alternatively, we could shift energies from digital technologies to robotics and quantum computing.
Objection: we could turn our postindustrial economy into one based on no-growth principles. After all, as Edward Abbey famously observed, growth for growth’s sake is the ideology of the cancer cell.
Answer: you first. Seriously, try to convince people that they don’t need any more economic growth. Think of the vast equity issues involved in telling the developing world to stop. Or doing this without redistributing wealth.
Objection: a resource crash could knock these futures offline.
Answer: true.
Objection: we could voluntarily stop developing technologies.
Answer: “giving up the gun” rarely works, historically, with the notable exception of state power used against the crossbow.
Objection: a new anti-technological politics could arise, urging us to return to an older form of humanity. NeoLuddites? anti-intellectuals? New Humanists?
Answer: it’s possible, and something to watch for. But too many people see themselves benefitting from technologies. This would take some interesting cultural turns.
Objection: could a religious movement against new technologies arise? Frank Herbert gave us such a vision in his classic novel Dune, where a kind of crusade blocks AIs from working for centuries.
Answer: it’s possible, and something to watch for. But most religions are happy to use these technologies, in the end. So we would have to anticipate a genuinely new religious movement.
Objection: various Black Swans could occur, such as an extraordinarily massive solar event or EMP strikes from some foe or the clathrate gun firing.
Answer: true. That’s the nature of very unlikely, high impact events. Will our technological society build enough resilience into its new Earth?
But before we leave, let’s go even further.
Imagine 2075.
The humans we knew from the year 2000 are a vanishingly rare type, studied by descendants of anthropologists. Artificial intelligences busily work around and above the globe, redesigning life. The biosphere has gained and lost species and entire biomes. The Earth… is transformed. Education and creativity? Something else entirely.
Some inspired and creative AI and semi-human teams launch mixed reality reenactments of life in 2016.
5. What is to be done?
How can we anticipate and act strategically in the face of such potential transformation?
We are so not ready.
We currently suffer under a bad mix: the weird simultaneity of a popular and well-funded embrace of technology with strong anti-science sentiment and unreason. Academic disciplines are not necessarily prepared (think of how 2008 caught macroeconomics flat-footed, and what 2016 is doing to political science). We are radically divided over what constitutes human nature, even as we start to hack it. In the United States we enjoy political sclerosis and dystopian reaction.
We have many political leaders skeptical of, if not actively opposed to, civil liberties in the digital world: Trump and Clinton; Cameron; China’s gamified autocracy. Journalism is less free to report now than it was a decade ago, according to a Reporters Sans Frontières report; Turkey arrested journalists on World Press Freedom Day. Meanwhile, American TV “news” is a planetary and historical embarrassment. We maintain a horrible legacy of prejudice restricting human growth and creativity. And inequality is heading for nineteenth-century levels.
So given all of that, what shouldn’t we do?
Don’t think about it.
Evade the issue by thinking of retirement. (Present generations don’t have a good record of leaving a decent world to the young.)
What is to be done instead?
The blindingly obvious: collaborate with each other, across institutions, sectors, nations, populations, professions. Work through inter-institutional groups (like NMC!). Use social media. Use and be open. Read and watch science fiction.
The not so obvious, and challenging: rethink everything in terms of automation’s possibilities. Think of what can be replaced. Become a cyborg. Use futures methods.
The more challenging: lead! You’re best placed, on campuses and in other institutions, to inform people in context. Get political. Imagine different worlds, and imagine inhabiting them – yourselves, your institution, your children and the generation to come.
You. Help. Make. The. Future.
It isn’t something just done to you, delivered like gifts from a cargo cult. You help make the future.
Every decision you make contributes. When you craft a creative work, or teach in a certain way, or nudge a campus in one direction, or support a political candidate, or tell a story, or dream out loud, or influence younger folks, you help co-create what is coming next. Don’t be passive – it’s too late! You’re already making it happen. You are all – each of you – practicing futurists and world-makers. Do so with open eyes, and the flame of creative possibility roaring in your heart.
Thank you.”
SELECTED SOURCES
Renata Avila, “Ciudades Rebeldes – hacia una red global de barrios y ciudades rechazando la vigilancia” [“Rebel Cities – toward a global network of neighborhoods and cities rejecting surveillance”].
Erik Brynjolfsson and Andrew McAfee, The Second Machine Age.
Andrea Castillo, “Can a Bot Run a Company?”
Alison Cook-Sather, Catherine Bovill, and Peter Felten, Engaging Students as Partners in Learning and Teaching: A Guide for Faculty (2014).
Kristi DePaul, “Robot Writers, Open Education, and the Future of EdTech” (2015).
Lori Dorn, “The First Aerial Illuminated Drone Show in the United States Takes Place Over the Mojave Desert”.
“Fully automated luxury communism: a utopian critique”.
Donna Haraway, “Anthropocene, Capitalocene, Chthulucene: Staying with the Trouble”.
Michio Kaku, Physics of the Future.
Rebecca Keller, “The Rise of Manufacturing Marks the Fall of Globalization”.
Kevin Kelly, The Inevitable.
“Komatsu to use drones for automated digging in the U.S.”
Ray Kurzweil, http://ift.tt/25shdmn
Brooke McCarthy, “Flex-Foot Cheetah”.
Alexis Madrigal, “‘The Future Is About Old People, in Big Cities, Afraid of the Sky’”.
Babak Parviz, “Augmented Reality in a Contact Lens”.
Brandt Ranj, “Goldman Sachs says VR will be bigger than TV in 10 years”.
David Rose, Enchanted Objects.
Edward Snowden, “Inside the Assassination Complex”.
Avianne Tan, “Legally Blind 5th Grader Sees Mother for 1st Time Through Electronic Glasses”.
Edutech
via Bryan Alexander http://ift.tt/25FGf1H
June 27, 2016 at 01:02AM
Tuesday, June 21, 2016
In Inside Higher Ed, a profile and some dangerous ideas
http://ift.tt/28NtL3o
Joshua Kim just published an article in Inside Higher Ed. Which isn’t newsworthy in itself, because that one-man writing machine writes *daily* technology and education columns there, and has done so for years. Which is impressive!
This new one is unusual, and for two reasons. One reason is, well, about me, and another is about where academia and our thinking about education are headed.
“Enough about you – let’s talk about me!” Kim begins with an incredibly kind profile of my post-2013 work. It’s staggeringly generous, and I would name my next child “Joshua” if I were still reproducing. It’s uncannily like reading my obituary before dying. Seriously, thank you, Josh, for a very kind gift. I’m not going to quote it here; just read it. (If you’re new to my work, it’s a good introduction. And hello!)
What does it mean to do research in a time characterized by the web and a changing academia? How can not being connected to a single institution – full time – contribute to our collective understanding?
Josh Kim answers this in terms of independence of perspective:
Our higher education community has a self-serving interest in supporting independent scholars… The need for autonomous and independent thought leadership in edtech is particularly acute given the creeping corporatization of postsecondary educational technology.
I agree very much. Although I’m often skeptical of the corporatization-of-academia argument (it’s often diffuse, ahistorical, and reflects a resistance to looking seriously at finances), he’s spot on here. The American technology press is usually frantically pro-business. Educators all too often adopt business language and approaches, instead of thinking in nonprofit terms. Our neoliberal times subsume too much to the market, as we lose sight of the public good and of nonmarket, nonmonetized benefits to individuals and groups.
There’s even more to the independent perspective angle. You see, in American higher education it is very difficult to look at the full range of postsecondary education, since we have many, many incentives to narrow our view.
Consider: working in a single institution full time forces a localization of perspective. I used to teach at a liberal arts college, and that small campus filled my awareness. Course catalogs, the status of buildings and grounds, the nature of the current student body, details of senior administrators’ statements, my budding understanding of the institution’s history, relationships with colleagues: all of this took up many brain cycles.
The institution’s identity as a liberal arts college further biased my viewpoint to that sector. I spent more time thinking about (say) Sewanee, Southwestern, and Davidson than I did about (for example) the University of Virginia or UT Austin. This made all kinds of sense politically and practically.
I was also encouraged to look to the state (Louisiana) and region (the Ark-La-Tex and the American South(east)). Naturally! That was where I lived, where the systems I relied on (economic, political, cultural) were grounded.
Yes, there are certainly countervailing forces and opportunities. Professional identity counts for a lot (there’s nothing regional about the PMLA). Associations – when they’re national and international – cross boundaries. And the web can mean a great deal. But the aforementioned incentives remain very powerful and determine a lot of behavior.
Now, after three years as BAC, my viewpoint is very different. My clients include a wide range of institutions: community colleges, Catholic universities, state colleges, research universities, state systems, online schools, libraries, museums, and, yes, liberal arts colleges. To support these clients I conduct research in a way that reaches the full ambit of American higher education; I must follow each sector and subsector, or else I cannot deliver value. So I track Yale and Harvard, yes, but also read George Lorenzo’s super community college newsletter The Source, follow California’s complex politics of funding its state universities, and so on.
My geographical understanding is also broader, since my work involves a great deal of travel. Although I live in the very small state of Vermont and have several excellent clients here, I also travel pretty extensively. Over the past twelve months I’ve journeyed to Finland, Australia, Malta, and Canada, not to mention Florida, Rhode Island, California, Michigan, Ohio, Texas, Pennsylvania, Washington D.C., Colorado, Indiana, Wisconsin, Massachusetts, New York, and Virginia. I have no choice but to think nationally and internationally. This has benefits for my research, especially as Americans tend not to notice other nations’ education systems. I think it benefits my clients, too.
There are costs to this approach, of course. At worst I have gained breadth at the expense of depth. I do not, cannot, know an institution as well as one who works there full time. I have to learn as much as I can about each, and try to shed light on them by comparison to other campuses.
Perhaps more urgently, this approach carries a serious risk of fragility. So far we’ve been able to manage it; as Josh Kim suggests: “diligently put away some dollars when business is robust, and keep his overall costs down”. Yet one or two catastrophic accidents could blow that away. Being offline for a month due to a heart attack (to pick one statistically not improbable event for a man my age) could devastate the business, both financially (depending on how insurance plays out) and in terms of time (lost contacts, lost connections to clients, lost business). Environmental factors, such as a financial crisis, carry equal risks. Institutions can buffer one against these shocks; in my situation there is no such buffer.
So I operate without a net and win certain advantages thereby. I hope to keep working those benefits and sharing the results for a long time to come.
Thank you, Josh, for the fine article and prompt.
PS: I’m very pleased by Rodney Hargis’ #FTTE shout out:
Nice piece on @bryanalexander, but no mention of #FTTE – Bryan Alexander and the Independent Scholar http://ift.tt/24ZE0Uu … #edutech
Edutech
via Bryan Alexander http://ift.tt/25FGf1H
June 21, 2016 at 03:49AM
Thursday, June 16, 2016
Ed-Tech and the Commercialization of School
http://ift.tt/261p62C
I was invited to speak this evening to Alec Couros and Katia Hildebrandt’s class on current ed-tech issues, #ECI830. As part of the course, students are engaging in a “Great Ed-Tech Debate,” arguing one side or another of a variety of topics: that technology enhances learning, that technology is a force for equity, that social media is ruining childhood, and so on. Tonight’s debate: “Public education has sold its soul to corporate interests in what amounts to a Faustian bargain.” Here are some of the remarks I made to the class about commercialization and education technology.
Ed-tech is big business. I’ll start with some numbers: According to one market analyst firm, the ed-tech market totaled $8.38 billion in the 2012–13 academic year. 2015 was a record year for ed-tech investment, with some $2.98 billion in venture capital going to startups in the industry. Companies and venture capitalists alike see huge opportunities for what they insist will be a growing market: last year, McKinsey called education a $1.5 trillion industry. One firm predicted that the “smart education and learning market” will grow from $105.23 billion in 2015 to $446.85 billion by 2020. Testing and assessment are the largest category of this market. Testing and assessment remain the primary reason why schools buy computers; these are also the primary purposes for which teachers say they use new technologies in their classrooms.
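It’s worth pausing on what that last forecast implies. The figures are the analysts’; the arithmetic below, a sanity check, is mine:

```python
# Implied compound annual growth rate (CAGR) of the forecast
# "smart education and learning market": $105.23B (2015) to $446.85B (2020).
start, end, years = 105.23, 446.85, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year, sustained for five years")
# roughly 33.5% annually -- an extraordinarily aggressive projection
```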
We can’t talk about corporate interests and ed-tech without talking about testing. We can’t talk about corporate interests and ed-tech without talking about politics and policies. Why do we test? Why do we measure? Why has this become big business? Why has this become the cornerstone of education policy?
There’s something about our imagination and our discussion of education technology that, I’d contend, triggers an amnesia of sorts. We forget all history – all history of technology, all history of education. Everything is new. Every problem is new. Every product is new. We’re the first to experience the world this way; we’re the first to try to devise solutions.
So when people say that education technology enables a takeover of public schools by corporate interests, it’s pretty easy to look at history and respond “No. Not true.” Schools have long turned to outside, commercial vendors in order to provide goods and services: pencils, paper, chairs, desks, clocks, bells, chalkboards, milk, crackers, playground equipment, books. But rather than pointing to this and insisting that there’s always been someone selling things to schools and therefore selling to schools is perfectly acceptable, we should look more closely at how the relationship between public schools and vendors has changed over time: what’s being sold, who’s doing the selling, and how all that influences what happens in the classroom and what happens in the stories society tells itself about education. The changes here – to the stories, to the markets – aren’t merely a result of more “ed-tech,” but again, we need to ask if and how and why “ed-tech” might be a symptom of an increasing commercialization of education, not just the disease.
Again, when we talk about “ed-tech,” we usually focus on recent technologies. We don’t typically consider the chalkboard, the textbook, the pencil, the window, the photocopier. When we say “ed-tech,” we often mean “computers.” But even then we don’t think of the large mainframe computers and the terminals that students were using in the 1970s, for example. Ed-tech amnesia: we act as though nobody thought about using computers in the classroom until Steve Jobs introduced the iPad, or something. Indeed, a founder of an ed-tech company was recently cited in The New York Times as saying “Education is one of the last industries to be touched by Internet technology,” to which I have to offer an important correction: universities actually helped invent the Internet. (And I want to return to this point in a minute: who do we identify – schools or businesses, the public sector or the private sector – as being the locus of ed-tech “innovation”?)
I am particularly interested in the history of education technologies that emerged before the advent of the personal or mainframe computer, before the Internet, in the early parts of the twentieth century. This is when, for example, we saw the development of educational psychology as a field and in turn the development of educational assessment. This is when the multiple choice test was first developed, as well as the machines that could grade these types of tests. To give you some dates: Frederick Kelly is often credited with the invention of the multiple choice test in 1914; the first US patent for a machine to score this type of test – that is, to detect pencil marks on paper and compare them to an answer key – was filed in 1937. IBM launched a commercial service for a “test scoring machine” that same year.
Speaking of commercial services and commercial interests then, standardized testing was already a big business by the 1920s. Enrollment in public schools was growing rapidly at this time, and these sorts of assessments were seen as more “objective” and more “scientific” than the insights that classroom teachers – mostly women, of course – could provide. Public schools were viewed as failing – failing to educate, failing to enculturate, failing to produce career- and college- and military-ready students. (Of course, public schools have always been viewed as failing.) They were deemed grossly inefficient, and politicians and administrators alike insisted that schools needed to be run more like businesses. The theories of scientific management were applied to schools, and “schooling” – the process, the institution – increasingly became viewed as a series of inputs and outputs that could be measured and controlled.
Computers, in many many ways, are simply an extension of this. Learning analytics is often framed as a “hot new trend” in education. But it’s actually quite an old one. Thanks to new technologies, we do have more data now to feed these measurements and assessments.
We also have, thanks to new technologies, a renewed faith in “data” as holding all the answers: the answers to how people learn, the answers to how students succeed, the answers to why students fail, the answers to which teachers improve test scores, the answers to which college majors make the most money, the answers to which TV shows make you smarter or which breakfast cereals make you dumber, and so on. Again, this obsession with data isn’t new; it’s rooted in part in Taylorism – in a desire for maximized efficiency (which is in turn a desire for maximized cost savings and maximized profitability).
There’s an inherent conflict, I’d argue, between a culture that demands learning efficiency and a culture that recognizes learning messiness. It’s one of the reasons that schools – public schools – have been viewed as spaces distinct from businesses. Humans are not widgets. The cultivation of a mind cannot be mechanized. It should not be mechanized. Nevertheless, that’s been the impetus – an automation of education – behind much of education technology throughout the twentieth century. The commercialization of education is just one part of this larger ideology.
Alongside the push for more efficiency in education – through technology, through scientific management – has been a call for more competition in education. The Nobel Prize-winning economist Milton Friedman, for example, called for school vouchers in the 1950s, arguing that families should be able to use public dollars to send their children to any school, public or private – one should be “free to choose,” as he put it – and that choice and competition would necessarily improve education. During the latter half of the twentieth century, this idea of competition and of outsourcing gained political prominence. Some schools started to turn to outside vendors for remedial education – to companies like Sylvan Learning, for example. And some schools started to turn to vendors for instruction in specific content areas, such as foreign languages. By the 1990s, companies like Edison were offering “school management” in its entirety as a for-profit business. These were never able to demonstrate that they were better than traditional public schools; often they were much worse.
But as my short history here should underscore, the privatization of all or part of public schools was already well underway, in no small part because of the power of this dominant narrative: that competition and efficiency were the purview of the private sector and were something that the public sector simply couldn’t get right.
No surprise, I suppose, that this is the story you hear a lot from today’s technology and education technology entrepreneurs and investors – many of whom are involved politically and financially in “education reform” efforts. It’s as I cited at the outset: there’s almost complete amnesia about the long history of ed-tech and about the role that schools have played in the development of the tech itself and of associated pedagogical practices. (LOGO came from MIT. The Mosaic web browser came from the University of Illinois. PLATO came from the University of Illinois. TurnItIn came from Berkeley. WebCT came from UBC. Google’s origins are at Stanford.) Nevertheless, you’ll hear this: “school is broken” – it’s that old story again and again. Tech companies assure us that they’ll fix it. Fixing schools requires “innovation”; “innovation” requires the private sector. “Innovative schools” are the ones that have most successfully adopted business practices – scientific management – and that have bought the most technology.
To reiterate, the problem isn’t simply that schools are spending billions of taxpayer dollars on technology. That is, the problem is not simply that there are businesses that sell products to schools; businesses have always sold products to schools. The problem is that we don’t really examine the ideologies that accompany these technologies. How, for example, do new technologies coincide with ways in which we increasingly monitor and measure students? How do new technologies introduce and reinforce the values of competition, individualism, and surveillance? How do new technologies change the way in which we recognize and even desire certain brands in the classroom? How do new technologies – the insistence that we must buy them, we must use them – help to change the purpose of school away from civic goals and towards those defined by the job market? How do new technologies themselves treat students as a commercial product?
When I insist that “there’s a history to ed-tech,” some people hear me say “nothing has changed.” But that’s not my message. Ed-tech in 2016 is different from ed-tech in 1916. I mean, clearly the tech is different. But the political and economic power of tech is different too. Some of the biggest names in education philanthropy are technologists: Bill Gates, Mark Zuckerberg. Former members of the US Department of Education, now and in the past, work for ed-tech companies or as ed-tech investors. And to close with a number that I opened with: last year, one investment analyst firm calculated that $2.98 billion had been invested in ed-tech startups. The money matters. But I’d contend that the narratives that powerful people tell about education and technology might matter even more.
Edutech
via Hack Education http://ift.tt/zsM1Vc
June 14, 2016 at 04:02PM
Tuesday, June 7, 2016
A snapshot of non-dramatic campus economic pressure
http://ift.tt/1ZulDFY
Let me share some stories about higher education from this week. These aren’t technology stories, nor futuristic accounts. Instead each anecdote illustrates the enormous financial pressures squeezing most American colleges and universities. None of them is unusually dramatic: no closures in this post, no queen sacrifices. Just the steady ratcheting up towards crisis.
Item: the University of Massachusetts Boston told 400 adjuncts that they might not be rehired this fall. That is about one third of the campus instructional staff, and more than half of the non-tenured faculty:
There are 1,271 total full- and part-time faculty, according to university officials. About 775 of those are nontenure track, about 400 of whom have received notices that they might not have jobs in the fall.
Note that this comes after fall classes are already on the books. See, things are in flux:
Although many adjuncts have already been scheduled for classes in the fall, the school is reexamining the schedule to meet student demand “most effectively and efficiently,” according to campus spokesman DeWayne Lehman. “We expect that a substantial number of these will be reappointed later in the summer as . . . our funding picture becomes clearer,” he said in a statement.
What’s the cause? Oh my dear readers, you know the drill:
Lehman said Thursday that the campus faces a number of budget uncertainties. The college is not sure how much money it will get from the state this year, trustees have not set the tuition rate for next year, and the campus doesn’t know how many students will enroll, he said.
Item: Chestnut Hill College in Philadelphia is cutting staff pay to avoid layoffs.
Seventeen of the 177-member staff agreed to take a salary cut or a reduced workweek effective July 1, and the college forced six other full-time employees and one part-time employee to reduce their workweeks by up to eight hours, college officials said this week...
The cancellation of raises, which applies to both staff and faculty, allowed the college to close the gap, [spokeswoman Kathleen Spigelmyer] said. Savings from the cuts amounted to about 3.5 percent of the college’s budget, she said.
Good for them for being able to avoid riffing (laying off) people. On the other hand, what a sacrifice. How many schools are now making such a decision?
Item: Vermont’s state legislature gave public colleges some cash, which is a good thing. However, Vermont is by some measures the least supportive state in the nation for its public system. And this cash is a one-time deal, not a ratchet upwards in the annual budget.
[Jeb Spaulding, chancellor of Vermont State Colleges] says the system has cut more than $2.1 million in payroll costs over the past two years, largely by not filling vacant positions. He says the search for additional savings continues.
What this looks like on the ground:
[Sandy Noyes, a staff assistant for the Humanities and Writing and Literature Departments] has been at Johnson [College] for 23 years, and now serves as vice-chair of the staff union’s bargaining unit. Things have never been opulent at the college, Noyes says. But she says the money situation is tighter now than at any time in the past.
“And now we’ve been cut down to even trying to make sure we don’t buy too many pens or pencils, or too much paper,” Noyes says.
Once more, nothing dramatic. There’s even something good there, amidst the negative.
How much of American higher education now operates under such financial pressure?
(via Steve Bragaw for one of these)
Edutech
via Bryan Alexander http://ift.tt/25FGf1H
June 7, 2016 at 12:52AM
Wednesday, June 1, 2016
No, Blackboard Report Did Not Conclude That Online Classes Are “A Poorer Experience”
http://ift.tt/1sTfyIV
I’m seeing a lot of chatter online about the recently-released Blackboard report and this slide in particular:
Foundational Insights
- When students take a class online, they make a tacit agreement to a poorer experience which undermines their educational self worth.
- Students perceive online classes as a loophole they can exploit that also shortcuts the “real” college experience.
- Online classes don’t have the familiar reference points of in-person classes which can make the courses feel like a minefield of unexpected difficulties.
- Online students don’t experience social recognition or mutual accountability, so online classes end up low priority by default.
- Students take more pride in the skills they develop to cope with an online class than what they learn from it.
- Online classes neglect the aspects of college that create a lasting perception of value.
To make matters more interesting, the next slide elaborated on the first insight, stating:
Most students who enroll in an online class recognize and express that they are agreeing to a lesser experience.
Did Blackboard just commit an act of unintentional honesty, acknowledging that students don’t like online courses in general? That would be quite the headline. But it would not be accurate.
I was quite curious about how to interpret these findings, since the report clearly states that it is based on Contextual Inquiry and Participatory Design aimed at helping Blackboard “empathize with their unique experiences”. But which students did they talk to, and should anyone extrapolate these findings beyond the “unique experiences” of the specific students participating in the study? There simply was no context presented in the report to help answer this question. As Peter Shea mentioned in the comments to Mike Caulfield’s post:
This report is interesting. I think there are issues in how methods are portrayed and in the language used in generalizing findings. We know for example that the methods are qualitative and therefore are not intended to be generalizable in any statistical sense.
I had similar questions, so I asked Blackboard for clarification. Katie Blot was kind enough to send me an extensive explanation by email, with a phone call follow-up today. While I can’t share all of the details, I will paraphrase. The short answer is that they talked to very few students, and it is a mistake to generalize the results. This type of design research is meant to “provoke new product and service design”. The slide shared above states:
Insights are provocations. We synthesize the data we hear in the field into a series of framing statements. Each statement is intended to change the way we think about the academic journey, and to help us consider optimistic futures related to online and hybrid learning.
Katie shared the following statement to provide more detail at my request.
We have done many of these types of studies that include students, faculty and other personas of various demographics on a variety of topics. We do it for internal purposes and, when the team finds insights it thinks may be of interest, we share through our blog. It is also important to note that this is not the only type of research we do. We also do quantitative research on large, statistically significant samples of users to understand behaviors on a larger scale. It is the combination of these two types of research, one that helps us understand what people do and the other that helps us understand why, that uniquely shape our thinking about how we need to scope and design our products and services.
While Blackboard had good intentions in taking their internal design research and sharing it publicly, they made a mistake by not thinking through how the community would interpret the implied results, especially by not addressing the question of sample size or the student groups represented. I think there is a misperception in the ed tech community that you either need academic research rigor, with p values and removal of statistical bias, or you have nothing. This type of public release of research calls for something in between – a description with enough context to allow readers to determine whether to extrapolate or not. Blackboard provided this context on the applied ethnographic nature of the research, but not on the very important student sample size and groupings.
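To see concretely why the missing sample size matters, here is a hedged illustration with invented numbers (the report discloses no actual counts):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score 95% confidence interval for a proportion -- a
    standard way to show how little a small sample pins down."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: 10 of 12 interviewed students call online a "lesser" experience.
low, high = wilson_interval(10, 12)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 55% to 95% -- very wide
```

Design research isn’t trying to estimate such a proportion at all, of course; the point is that nothing in the deck licenses a population-level claim.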
For what it’s worth, the students were mostly of traditional age (18–24), drawn from a community college and two research universities. I could see this research being generalizable to those institutions, or at least to the specific academic programs the students are in, but I am not at liberty to share the school names.
Since Mike Caulfield raised the question, I asked for his reaction to this explanation.
It’s not that far from what I would expect, really. The point with this sort of stuff, as far as I understand it, is to look for stories that you may be missing, or may not have seen as dominant, and then see if you can see them elsewhere. You’re not trying to prove something; you’re trying to see what you might be missing about the experience. You use this to build products, and look for your confirmation there.
One of the comments on my post last night came from Karen Swan of the University of Illinois Springfield, who gave a very interesting teaser for an upcoming research publication.
We have an article coming out in a special issue of Online Learning on learning analytics that used around 650,000 student records from the Predictive Analytics Reporting (PAR) Framework to explore retention and progression in 5 community colleges, 5 four-year universities, and 4 primarily online institutions. We grouped students by how the courses they took in their first 8 months were delivered — only on-ground, only online, or some of both — and found that the only students hurt by taking online courses were community college students taking only online courses. In many cases, however, students taking some of both had better rates of retention and progression across all three institutional types. So it looks like not only are significant numbers of students blending their classes, but it is paying off for them.
I look forward to seeing this article.
In summary:
- Don’t interpret Blackboard’s research to conclude that online learning is a poorer experience in general; instead treat it as a provocative framing statement to help people change how they perceive students’ academic journeys, particularly for Mixed-Course programs (mix of online and face-to-face courses).
- Blackboard should provide more context when they publicly share internal research.
- Nevertheless, this is quite interesting research that frames the student perception of online courses when mixed with face-to-face courses.
Edutech
via e-Literate http://mfeldstein.com
May 27, 2016 at 08:14AM
"Conservatives" Control Public Higher Education
http://ift.tt/1Z4DOld
Let's hope they take good care of it.
Edutech
via Blog U http://ift.tt/1m4qxoP
May 26, 2016 at 03:03PM