What’s behind phantom cellphone buzzes?

Have you ever experienced a phantom phone call or text? You’re convinced that you felt your phone vibrate in your pocket, or that you heard your ring tone. But when you check your phone, no one actually tried to get in touch with you.

You then might plausibly wonder: “Is my phone acting up, or is it me?”

Well, it’s probably you, and it could be a sign of just how attached you’ve become to your phone.

At least you’re not alone. Over 80 percent of college students we surveyed have experienced it. However, if it’s happening a lot – more than once a day – it could be a sign that you’re psychologically dependent on your cellphone.

There’s no question that cellphones are part of the social fabric in many parts of the world, and some people spend hours each day on their phones. Our research team recently found that most people will fill their downtime by fiddling with their phones. Some even do so in the middle of a conversation. And most people will check their phones within 10 seconds of getting in line for coffee or arriving at a destination.

Clinicians and researchers still debate whether excessive use of cellphones or other technology can constitute an addiction. It wasn’t included in the DSM-5, the latest update to the American Psychiatric Association’s definitive guide for classifying and diagnosing mental disorders.

But given the ongoing debate, we decided to see if phantom buzzes and rings could shed some light on the issue.

A virtual drug?

Addictions are pathological conditions in which people compulsively seek rewarding stimuli, despite the negative consequences. We often hear reports about how cellphone use can be problematic for relationships and for developing effective social skills.

One of the features of addictions is that people become hypersensitive to cues related to the rewards they are craving. Whatever it is, they start to see it everywhere. (I had a college roommate who once thought that he saw a bee’s nest made out of cigarette butts hanging from the ceiling.)

So might people who crave the messages and notifications from their virtual social worlds do the same? Would they mistakenly interpret something they hear as a ring tone, their phone rubbing in their pocket as a vibrating alert or even think they see a notification on their phone screen – when, in reality, nothing is there?

A human malfunction

We decided to find out. From a tested survey measure of problematic cellphone use, we pulled out items assessing psychological cellphone dependency. We also created questions about the frequency of experiencing phantom ringing, vibrations and notifications. We then administered an online survey to over 750 undergraduate students.

Those who scored higher on cellphone dependency – they more often used their phones to make themselves feel better, became irritable when they couldn’t use their phones and thought about using their phone when they weren’t on it – had more frequent phantom phone experiences.

Cellphone manufacturers and phone service providers have assured us that phantom phone experiences are not a problem with the technology. As HAL 9000 might say, they are a product of “human error.”

So where, exactly, have we erred? We are in a brave new world of virtual socialization, and the psychological and social sciences can barely keep up with advances in the technology.

Phantom phone experiences may seem like a relatively small concern in our electronically connected age. But they raise the specter of how reliant we are on our phones – and how much influence phones have in our social lives.

How can we navigate the use of cellphones to maximize the benefits and minimize the hazards, whether it’s improving our own mental health or honing our live social skills? What other new technologies will change how we interact with others?

Our minds will continue to buzz with anticipation.

Daniel J. Kruger, Research Assistant Professor, University of Michigan

Photo Credit: ‘Brain’ via http://www.shutterstock.com

This article was originally published on The Conversation. Read the original article.

Hidden figures: How black women preachers spoke truth to power

Each semester I greet the students who file into my preaching class at Howard University with a standard talk. The talk is not an overview of the basics – techniques of sermon preparation or sermon delivery, as one might expect. Outlining the basics is not particularly difficult.

The greatest challenge, in fact, is helping learners to stretch their theology: namely, how they perceive who God is and convey what God is like in their sermons. This becomes particularly important for African-American preachers, especially African-American women preachers, because most come from church contexts that overuse exclusively masculine language for God and humanity.

African-American women typically comprise more than 70 percent of the active membership of any African-American congregation one might attend today. According to one Pew study, African-American women are among the most religiously committed of the Protestant demographic – eight in 10 say that religion is important to them.

Yet, America’s Christian pulpits, especially African-American pulpits, remain male-dominated spaces. Still today, eyebrows rise, churches split, pews empty and recommendation letters get lost when a woman mentions that God has called her to preach.

The deciding factor for women desiring to pastor and be accorded respect equal to their male counterparts generally boils down to one question: Can she preach?

The fact is that African-American women have preached, formed congregations and confronted many racial injustices since the slavery era.

Here’s the history

The earliest black female preacher was a Methodist woman simply known as Elizabeth. She held her first prayer meeting in Baltimore in 1808 and preached for about 50 years before retiring to Philadelphia to live among the Quakers.

An unbroken legacy of African-American women preachers persisted even long after Elizabeth. Reverend Jarena Lee became the first African-American woman to preach at the African Methodist Episcopal (AME) Church. She had started even before the church was officially formed in the city of Philadelphia in 1816. But, she faced considerable opposition.

AME Bishop Richard Allen, who founded the AME Church, had initially refused Lee’s request to preach. It was only upon hearing her speak, presumably from the floor during a worship service, that he permitted her to give a sermon.

Lee reported that Bishop Allen,

“rose up in the assembly, and related that [she] had called upon him eight years before, asking to be permitted to preach, and that he had put [her] off; but that he now as much believed that [she] was called to that work, as any of the preachers present.”

Lee was much like her 19th-century contemporary, the famed women’s rights activist Sojourner Truth. Truth had escaped John Dumont’s slave plantation in 1828 and landed in New York City, where she became an itinerant preacher active in the abolition and woman’s suffrage movements.

Fighting the gender narratives

For centuries now, the Holy Bible has been used to suppress women’s voices. These early female black preachers reinterpreted the Bible to liberate women.

Truth, for example, is most remembered for her captivating topical sermon “Ar’nt I A Woman?,” delivered at the Woman’s Rights National Convention on May 29, 1851 in Akron, Ohio.

In her convention address, Truth offered a skillful historical interpretation of the scriptures, using the Bible to liberate women and set the record straight about women’s rights. She professed:

“Then that little man in black there, he says women can’t have as much rights as men, because Christ wasn’t a woman! Where did your Christ come from? From God and a woman! Man had nothing to do with Him.”

Like Truth, Jarena Lee spoke truth to power and paved the way for other mid- to late 19th-century black female preachers to achieve validation as pulpit leaders, although neither she nor Truth received official clerical appointments.

The first woman to achieve this validation was Julia A. J. Foote. In 1884, she became the first woman ordained a deacon in the African Methodist Episcopal Zion (AMEZ) Church. Shortly after followed the ordination of AME evangelist Harriet A. Baker, who in 1889 was perhaps the first black woman to receive a pastoral appointment. Mary J. Small later became the first woman to achieve “elder ordination” status, which permitted her to preach, teach and administer the sacraments and Holy Communion.

Historian Bettye Collier-Thomas maintains that the goal for most black women seeking ordination in the late 19th and early 20th centuries was simply a matter of gender inclusion, not necessarily pursuing the need to transform the patriarchal church.

Preaching justice

An important voice was that of Rev. Florence Spearing Randolph. In her role as reformer, suffragist, evangelist and pastor, she daringly advanced the cause of freedom and justice within the churches she served and beyond, during the Great Migration of the 20th century.

In my recent book, “A Pursued Justice: Black Preaching from the Great Migration to Civil Rights,” I trace the clerical legacy of Rev. Randolph and describe how her prophetic sermons spoke to the spiritual, social and industrial conditions of her African-American listeners before and during the largest internal migration in the United States.

In her sermons she brought criticism to the broken promises of American democracy, the deceptive ideology of black inferiority and other chronic injustices.

Randolph’s sermon “If I Were White,” preached on Race Relations Sunday, Feb. 1, 1941, reminded her listeners of their self-worth. It emphasized that America’s whites, who claimed to be defending democracy in wartime, had an obligation to all American citizens.

Randolph spoke in concrete language. She argued that the refusal of whites to act justly toward blacks, domestically and abroad, embraced sin rather than Christ. That, she said, revealed a realistic picture of America’s race problem.

She also spoke about gender discrimination. Randolph’s carefully crafted 1909 sermon “Antipathy to Women Preachers,” for example, highlighted several heroic women in the Bible. From her interpretation of their scriptural legacy, she argued that gender discrimination in Christian pulpits rested on a misreading of scripture.

Randolph used her position as preacher to effect social change. She was a member of and organizer for the Woman’s Christian Temperance Union (WCTU), which led efforts to pass the 18th Amendment, prohibiting the production, sale and transport of alcoholic beverages in the United States. Her affiliation with the WCTU earned her the title “militant herald of temperance and righteousness.”

Today, several respected African-American women preachers and teachers of preachers proudly stand on Lee’s, Small’s and Randolph’s shoulders raising their prophetic voices.

The Conversation

Kenyatta R. Gilbert, Associate Professor of Homiletics, Howard University

Photo Credit: Lynne Graves

This article was originally published on The Conversation. Read the original article.

America’s always had black inventors – even when the patent system explicitly excluded them

America has long been the land of innovation. More than 13,000 years ago, the Clovis people created what many call the “first American invention” – a stone tool used primarily to hunt large game. This spirit of American creativity has persisted through the millennia, through the first American patent granted in 1641 and on to today.

One group of prolific innovators, however, has been largely ignored by history: black inventors born or forced into American slavery. Though U.S. patent law was created with color-blind language to foster innovation, the patent system consistently excluded these inventors from recognition.

As a law professor and a licensed patent attorney, I understand both the importance of protecting inventions and the negative impact of being unable to use the law to do so. But despite patents being largely out of reach to them throughout early U.S. history, both slaves and free African-Americans did invent and innovate.

Why patents matter

In many countries around the world, innovation is fostered through a patent system. Patents give inventors a monopoly over their invention for a limited time period, allowing them, if they wish, to make money through things like sales and licensing.

Patent Office relief on the Herbert C. Hoover Building.

The patent system has long been the heart of America’s innovation policy. As a way to recoup costs, patents provide strong incentives for inventors, who can spend millions of dollars and a significant amount of time developing an invention.

The history of patents in America is older than the U.S. Constitution, with several colonies granting patents years before the Constitution was created. In 1787, however, members of the Constitutional Convention opened the patent process up to people nationwide by drafting what has come to be known as the Patent and Copyright Clause of the Constitution. It allows Congress:

“To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”

This language gives inventors exclusive rights to their inventions. It forms the foundation for today’s nationwide, federal patent system, which no longer allows states to grant patents.

Though the language itself was race-neutral, like many of the rights set forth in the Constitution, the patent system didn’t apply to black Americans born into slavery. Slaves were not considered American citizens, and laws at the time prevented them from applying for or holding property, including patents. In 1857, the U.S. commissioner of patents officially ruled that slave inventions couldn’t be patented.

Slaves’ inventions exploited by owners

During the 17th and 18th centuries, America was experiencing rapid economic growth. Black inventors were major contributors during this era – even though most did not obtain any of the benefits associated with their inventions since they could not receive patent protection.

Slave owners often took credit for their slaves’ inventions. In one well-documented case, a black inventor named Ned invented an effective, innovative cotton scraper. His slave master, Oscar Stewart, attempted to patent the invention. Because Stewart was not the actual inventor, and because the actual inventor was born into slavery, the application was rejected.

Stewart ultimately began selling the cotton scraper without the benefit of patent protection and made a significant amount of money doing so. In his advertisements, he openly touted that the product was “the invention of a Negro slave – thus giving the lie to the abolition cry that slavery dwarfs the mind of the Negro. When did a free Negro ever invent anything?”

Reaping the benefits of their own inventions

The answer to this question is that black people – both free and enslaved – invented many things during that time period.

One such innovator was Henry Boyd, who was born into slavery in Kentucky in 1802. After purchasing his own freedom in 1826, Boyd invented a corded bed created with wooden rails connected to the headboard and footboard.

The “Boyd Bedstead” was so popular that historian Carter G. Woodson profiled his success in the iconic book “The Mis-education of the Negro,” noting that Boyd’s business ultimately employed 25 white and black employees.

Though Boyd had recently purchased his freedom and should have been allowed a patent for his invention, the racist realities of the time apparently led him to believe that he wouldn’t be able to patent his invention. He ultimately decided to partner with a white craftsman, allowing his partner to apply for and receive a patent for the bed.

Some black inventors achieved financial success but no patent protection, direct or indirect. Benjamin Montgomery, who was born into slavery in 1819, invented a steamboat propeller designed for shallow waters in the 1850s. This invention was of particular value because, during that time, steamboats delivered food and other necessities through often-shallow waterways connecting settlements. If the boats got stuck, life-sustaining supplies would be delayed for days or weeks.

Montgomery tried to apply for a patent. The application was rejected due to his status as a slave. Montgomery’s owners tried to take credit for the propeller invention and patent it themselves, but the patent office also rejected their application because they were not the true inventors.

Benjamin Montgomery succeeded despite being refused a patent.

Even without patent protection, Montgomery amassed significant wealth and became one of the wealthiest planters in Mississippi after the Civil War ended. Eventually his son, Isaiah, was able to purchase more than 800 acres of land and found the town of Mound Bayou, Mississippi, after his father’s death.

A legacy of black innovators

The patent system was ostensibly open to free black people. From Thomas Jennings, the first black patent holder, who invented dry cleaning in 1821, to Norbert Rillieux, a free man who invented a revolutionary sugar-refining process in the 1840s, to Elijah McCoy, who obtained 57 patents over his lifetime, those with access to the patent system invented items that still touch the lives of people today.

This legacy extends through the 21st century. Lonnie Johnson generated more than US$1 billion in sales with his Super Soaker water gun invention, which has consistently been among the world’s top 20 best-selling toys each year since 1991. Johnson now owns more than 80 patents and has since developed different green technologies.

Bishop Curry V, a 10-year-old black inventor from Texas, has already applied for a patent for his invention, which he says will stop accidental deaths of children in hot cars.

Black women are also furthering the legacy of black inventors. Lisa Ascolese, known as “The Inventress,” has received multiple patents and founded the Association for Women Inventors and Entrepreneurs. Janet Emerson Bashen became the first black woman to receive a patent for a software invention in 2006. And Dr. Hadiyah Green recently won a $1 million grant related to an invention that may help treat cancer.

True to the legacy of American innovation, today’s black inventors are following in the footsteps of those who came before them. Now patent law doesn’t actively exclude them from protecting their inventions – and fully contributing to American progress.

The Conversation

Shontavia Johnson, Professor of Intellectual Property Law, Drake University

Photo Credit: Resources for History Teachers

This article was originally published on The Conversation. Read the original article.

Who counts as black?

For generations, intimacy between black men and white women was taboo. A mere accusation of impropriety could lead to a lynching, and interracial marriage was illegal in a number of states.

Everything changed with the 1967 Supreme Court decision Loving v. Virginia, which ruled that blacks and whites have a legal right to intermarry. Spurred by the court’s decision, the number of interracial marriages – and, with it, the population of multiracial people – has exploded. According to the 2000 Census, 6.8 million Americans identified as multiracial. By 2010, that number grew to 9 million people. And this leaves out all of the people who might be a product of mixed ancestry but chose to still identify as either white or black.

With these demographic changes, traditional notions of black identity – once confined to dark skin or kinky hair – no longer hold.

Mixed-race African-Americans can have naturally green eyes (like the singer Rihanna) or naturally blue eyes (like actor Jesse Williams). Their hair can be styled long and wavy (Alicia Keys) or into a bob cut (Halle Berry).

And unlike in the past – when many mixed-race people would try to do what they could to pass as white – many multiracial Americans today unabashedly embrace and celebrate their blackness.

However, these expressions of black pride have been met with grumbles by some in the black community. These mixed-race people, some argue, are not “black enough” – their skin isn’t dark enough, their hair not kinky enough. And thus they do not “count” as black. African-American presidential candidate Ben Carson even claimed President Obama couldn’t understand “the experience of black Americans” because he was “raised white.”

This debate over “who counts” has created something of an identity crisis in the black community, exposing a divide between those who think being black should be based on physical looks and those who think being black is more than looks.

‘Dark Girls’ and ‘Light Girls’

In 2011 Oprah Winfrey hosted a documentary titled “Dark Girls,” a portrayal of the pain and suffering dark-skinned black women experience.

It’s a story I know only too well. In 1992, I coauthored a book with DePaul psychologist Midge Wilson and business executive Kathy Russell called “The Color Complex,” which looked at the relationship between black identity and skin color in modern America.

The trailer for ‘Dark Girls.’

As someone who has studied the issue of skin color and black identity for over 20 years, I felt uneasy after I finished watching the “Dark Girls” film. No doubt it confirmed the pain that dark-skinned black women feel. But it left something important out, and I wondered if it would lead to misconceptions.

The film seemed to suggest that if you are black, you have dark skin. Your hair is kinky. Green or blue eyes, on the other hand, represent someone who is white.

I was relieved, then, when I was asked to consult on a second documentary, “Light Girls,” in 2015, a film centered on the pain and suffering mixed-race black women endure. The subjects who were interviewed shared their stories. These women considered themselves black but said they always felt out of place, on the outside looking in. Black men often adored them, but this could quickly flip to scorn if their advances were spurned. Meanwhile, friendships with darker-skinned black women could be fraught. Insults such as “light-bright,” “mello-yellow” and “banana girl” were tossed at lighter-skinned black women, objectifying them as anything but black.

Identity experts weigh in

Some of the experts on identity take issue with the general assumptions many might have about “who is black,” especially those who think blackness is determined by skin color.

For example, in 1902 sociologist Charles Horton Cooley argued that identity is like a “looking-glass self.” In other words, we are a reflection of the people around us. Mixed-race, light-skinned, green-eyed African-Americans born and raised in a black environment are no less black than their dark-skinned counterparts. In 1934, cultural anthropologist Margaret Mead, like Cooley, argued that identity is a product of our social interactions.

Maybe the most well-known identity theorist is psychologist Erik Erikson. In his most popular book, “Identity: Youth and Crisis,” published in 1968, Erikson also claimed that identity is a product of our environment. But he expanded the theory a bit: It includes not only the people we interact with but also the clothes we wear, the food we eat and the music we listen to. Mixed-race African-Americans – just like dark-skinned African-Americans – would be equally uncomfortable wearing a kimono, drinking sake or listening to ongaku (a type of Japanese music). On the other hand, wearing a dashiki, eating soul food and relaxing to the beats of rap or hip-hop music is something all black people – regardless of skin tone – can identify with.

Our physical features, of course, are a product of our parents. Indeed, in the not-too-distant future, with more and more interracial marriages taking place, we may find black and white hair texture and eye and skin color indistinguishable. It’s worth noting that there’s an element of personal choice involved in racial identity – for example, you can choose how to self-identify on the census. Many multiracial Americans simply identify as “multiracial.” Others, even if they’re a product of mixed ancestry, choose “black.”

Perhaps true blackness, then, dwells not in skin color, eye color or hair texture, but in the love for the spirit and culture of all who came before us.

The Conversation

Ronald Hall, Professor of Social Work, Michigan State University

Photo Credit: Wikipedia

This article was originally published on The Conversation. Read the original article.

Why do conservatives want the government to defund the arts?

Recent reports indicate that Trump administration officials have circulated plans to defund the National Endowment for the Arts (NEA), putting this agency on the chopping block – again.

Conservatives have sought to eliminate the NEA since the Reagan administration. In the past, arguments were limited to the content of specific state-sponsored works that were deemed offensive or immoral – an offshoot of the culture wars.

Now the cuts are largely driven by an ideology to shrink the federal government and decentralize power. The Heritage Foundation, a conservative think tank, argues that government should not use its “coercive power of taxation” to fund arts and humanities programs that are neither “necessary nor prudent.” The federal government, in other words, has no business supporting culture. Period.

But there are two major flaws in conservatives’ latest attack on the NEA: The aim to decentralize the government could end up dealing local communities a major blow, and it ignores the economic contribution of this tiny line item expense.

The relationship between government and the arts

Historically, the relationship between the state and culture is as fundamental as the idea of the state itself. The West, in particular, has witnessed an evolution from royal and religious patronage of the arts to a diverse range of arts funding that includes sales, private donors, foundations, corporations, endowments and the government.

Prior to the formation of the NEA in 1965, the federal government strategically funded cultural projects of national interest. For example, the Commerce Department subsidized the film industry in the 1920s and helped Walt Disney skirt bankruptcy during World War II. The same could be said for the broad range of New Deal economic relief programs, like the Public Works of Art Project and the Works Progress Administration, which employed artists and cultural workers. The CIA even joined in, funding Abstract Expressionist artists as a cultural counterweight to Soviet Realism during the Cold War.

The NEA came about during the Cold War. In 1963, President John F. Kennedy asserted the political and ideological importance of artists as critical thinkers, provocateurs and powerful contributors to the strength of a democratic society. His attitude was part of a broader bipartisan movement to form a national entity to promote American arts and culture at home and abroad. By 1965, President Johnson took up Kennedy’s legacy, signing the National Arts and Cultural Development Act of 1964 – which established the National Council on the Arts – and the National Foundation on the Arts and Humanities Act of 1965, which established the NEA.

Since its inception, the NEA has weathered criticism from the left and right. The right generally argues state funding for culture shouldn’t be the government’s business, while some on the left have expressed concern about how the funding might come with constraints on creative freedoms. Despite complaints from both sides, the United States has never had a fully articulated, coherent national policy on culture, unless – as historian Michael Kammen suggests – deciding not to have one is, in fact, policy.

Flare-ups in the culture wars

Targeting of the NEA has had more to do with the kind of art the government funded than with any discernible impact on the budget. The amount in question – roughly US$148 million – is a drop in the bucket, about 0.004 percent of the $3.9 trillion federal budget.

Instead, the arts were a focus of the culture wars that erupted in the 1980s, which often invoked legislative grandstanding for elimination of the NEA. Hot-button NEA-funded pieces included Andres Serrano’s “Immersion (Piss Christ)” (1987), Robert Mapplethorpe’s photo exhibit “The Perfect Moment” (1989) and the case of the “NEA Four,” which involved the rejection of NEA grant applications from performance artists Karen Finley, Tim Miller, John Fleck and Holly Hughes.

In each case, conservative legislators isolated an artist’s work – connected to NEA funding – that was objectionable due to its sexual or controversial content, such as Serrano’s use of Christian iconography. These artists’ works, then, were used to stoke a public debate about normative values. Artists were the targets, but often museum staff and curators bore the brunt of these assaults. The “NEA Four” case was significant because the artists’ grants were rejected under a statutory “decency” standard that lower courts deemed unconstitutional, although the Supreme Court upheld the decency clause itself in 1998.

As recently as 2011, former Congressmen John Boehner and Eric Cantor targeted the inclusion of David Wojnarowicz’s “A Fire in My Belly, A Work in Progress” (1986-87) in a Smithsonian exhibition to renew calls to eliminate the NEA.

In all these cases, the NEA had funded artists who either brought attention to the AIDS crisis (Wojnarowicz), invoked religious freedoms (Serrano) or explored feminist and LGBTQ issues (Mapplethorpe and the four performance artists). Controversial artists push the boundaries of what art does, not just what art is; in these cases, the artists were able to powerfully communicate social and political issues that elicited the particular ire of conservatives.

A local impact

But today, it’s not about the art itself. It’s about limiting the scope and size of the federal government. And that ideological push presents real threats to our economy and our communities.

Organizations like the Heritage Foundation fail to take into account that eliminating the NEA actually causes the collapse of a vast network of regionally controlled, state-level arts agencies and local councils. In other words, they won’t simply be defunding a centralized bureaucracy that dictates elite culture from the sequestered halls of Washington, D.C. The NEA is required by law to distribute 40 percent of its budget to arts agencies in all 50 states and six U.S. jurisdictions.

Many communities – such as Princeton, New Jersey, which could lose funding to local cultural institutions like the McCarter Theatre – are anxious about how threats to the NEA will affect their community.

Therein lies the misguided logic of the argument for defunding: It targets the NEA but in effect threatens funding for programs like the Creede Repertory Theatre – which serves rural and underserved communities in states like Colorado, New Mexico, Utah, Oklahoma and Arizona – and Appalshop, a community radio station and media center that creates public art installations and multimedia tours in Jenkins, Kentucky to celebrate Appalachian cultural identity.

While the present administration and the conservative movement claim they’re simply trying to save taxpayer dollars, they also ignore the significant economic impacts of the arts. The Bureau of Economic Analysis reported that the arts and culture industry generated $704.8 billion of economic activity in 2013 and employed nearly five million people. For every dollar of NEA funding, there are seven dollars of funding from other private and public funds. Elimination of the agency endangers this economic vitality.

Ultimately, the Trump administration needs to decide whether artistic and cultural work is important to a thriving economy and democracy.

The Conversation

Aaron D. Knochel, Assistant Professor of Art Education, Pennsylvania State University

Photo credit: Wikimedia Commons

This article was originally published on The Conversation. Read the original article.

Allison Davis: Forgotten black scholar studied – and faced – structural racism in 1940s America

When black historian Carter G. Woodson founded Negro History Week in 1926 (expanded to Black History Month in 1976), the prevailing sentiment was that black people had no history. They were little more than the hewers of wood and the drawers of water who, in their insistence upon even basic political rights, comprised an alarming “Negro problem.”

To combat such ignorance and prejudice, Woodson worked relentlessly to compile the rich history of black people. He especially liked to emphasize the role of exceptional African-Americans who made major contributions to American life. At the time, that was a radical idea.

W. Allison Davis (1902-1983) came of age in the generation after Woodson, but he was precisely the type of exceptional black person whom Woodson liked to uphold as evidence of black intelligence, civility, and achievement.

Davis was an accomplished anthropologist and a trailblazer who was the first African-American appointed full-time to the faculty of a predominantly white university – the University of Chicago in 1942. But Davis has faded from popular memory. In my forthcoming book “The Lost Black Scholar: Resurrecting Allison Davis in American Social Thought, 1902-1983,” I make the case that he belongs within the pantheon of illustrious African-American – and simply, American – pioneers.

Allison Davis, forgotten pioneer

Allison and Elizabeth Davis in New Haven, Connecticut, in 1939.
Courtesy of the Davis family.

Allison Davis and his wife Elizabeth Stubbs Davis were among the first black anthropologists in the country. Bringing their experiences on the wrong side of the color line to mainstream social science, they made landmark contributions to their field, including “Deep South” (1941) and “Children of Bondage” (1940). Those books sold tens of thousands of copies in the middle decades of the 20th century; they advanced social theory by explaining how race and class functioned as interlocking systems of oppression; and they broke methodological ground in combining ethnography with psychological assessments rarely applied in those days.

Allison Davis’ extensive body of research also had a real impact on social policy. It influenced the proceedings in Brown v. Board of Education (1954), undergirded the success of the federal Head Start program, and prompted school districts all across the country to revise or reject intelligence tests, which Davis had proven to be culturally biased. His “Social-Class Influences Upon Learning” (1948) made the most compelling case of that era that intelligence tests discriminated against lower-class people.

Despite the very real advances that Davis helped to inspire within American education in the 20th century, today those same accomplishments are at risk. American schools remain as racially segregated as ever due to poverty and discriminatory public policies. The investment in public education, especially compensatory programs such as Head Start, looks likely to diminish further amid growing support for privatization, embodied in Betsy DeVos’ recent confirmation as secretary of education. If we are to understand the nature of these issues today, we must understand their history, which Davis’ career helps to illuminate.

Davis’ scholarly contributions are unquestionable when considered now, many decades later. But as the problems above suggest, it is no longer enough to simply celebrate exceptional African-American pioneers like Davis, or just give lip service to their ideas. The next step is confronting the circumstances that constrained their lives. This means viewing their experiences in relation to the structural racism that has shaped American life since colonial times.

Bending – not breaking – academic color line

Consider Davis’ landmark appointment to the University of Chicago. Fitting the story into a master narrative of racial progress obscures more than it reveals. While the appointment did represent the crossing of a racial boundary and heralded the many more barriers that would be challenged in the ensuing decades, a closer look at the story gives little reason to celebrate.

Like all black scholars of his time, Davis had to be twice as good to get half as much as his fellow white male scholars (and the situation was far worse for black women scholars like Elizabeth Stubbs Davis). Only through compiling a truly remarkable record of achievement, and only amid the national fervor to make the U.S. the “arsenal of democracy” during World War II, would Chicago even consider appointing Allison Davis. Even then, he only received a three-year contract on the condition that the Julius Rosenwald Foundation (JRF) agree to subsidize most of his salary.

Even with the subsidy, certain university faculty members, such as Georgia-born sociologist William Fielding Ogburn, actively opposed the appointment on racist grounds. So, too, did some trustees at the JRF, including the wealthy New Orleans philanthropist Edgar B. Stern, who attempted to sabotage the grant. Discounting Davis’ accomplishments and implying instead a sort of reverse racism, Stern asserted that “the purpose of this move is to have Davis join the Chicago Faculty, not in spite of the fact that he is a Negro but because he is a Negro.” Similarly myopic charges have been a staple of criticism of affirmative action programs in more recent times.

The opposition ultimately failed to torpedo Davis’ appointment, but it did underscore the type of environment he would face at Chicago. As faculty members openly debated if he should even be allowed to instruct the university’s mainly white students, the administration barred him from the Quadrangle Club, where faculty regularly gathered and ate lunch. In a private letter to him, the university made clear that it “cannot assume responsibility for Mr. Davis’ personal happiness and his social treatment.”

As time wore on, such overt racism did begin to ebb, or at least confine itself to more private quarters. What never did subside, though, was an equally pernicious institutional racism that marginalized Davis’ accomplishments and rendered him professionally invisible.

As Davis collaborated with renowned white scholars at Chicago, his contributions were submerged under theirs – even when he was the first author and chief theorist of the work. When Daniel Patrick Moynihan, writing for Commentary magazine in 1968, failed to count Davis among his list of black scholars who studied black poverty (even though Davis was among the most prolific black scholars in that area), he registered the depth of Davis’ marginalization. Such marginalization, which stemmed also from Davis’ interdisciplinary approach and iconoclasm, has caused even historians to lose track of him and his important career.

Even the most exceptional African-Americans have never been able to transcend the racial system that ensnares them. Davis’ appointment did not usher in a new era of integration of faculties at predominantly white universities. It took another three decades for substantial numbers of black scholars to begin receiving offers of full-time, tenure-track employment. And because of the vastly disproportionate rates of poverty, incarceration and municipal neglect plaguing the black community, jobs in higher education often continued – and still continue – to be out of reach.

Davis was ensnared by the racism he studied

Few people better understood, or more thoughtfully analyzed, these very realities than did Allison Davis. This was a man who laid bare the systems of race and class that govern American life. He understood that education needed to be a bulwark for democracy, not merely a ladder for individual social mobility. He embodied how to confront injustice with sustained, productive resistance. Moreover, this was a man who refused to surrender to despair, and who chose to dedicate his life to making the country a better, more equal, more democratic place.

So as we pause to celebrate Black History Month, let us look seriously at the lives of forgotten pioneers such as Allison Davis. We should take joy in and marvel at their individual accomplishments, but never lose sight of the structural racism that delimited their lives, and that continues to plague American society today.

The Conversation

David Varel, Postdoctoral Fellow in African-American Studies, Case Western Reserve University

Photo Credit: The Davis family

This article was originally published on The Conversation. Read the original article.

Robot rights: at what point should an intelligent machine be considered a ‘person’?

Science fiction likes to depict robots as autonomous machines, capable of making their own decisions and often expressing their own personalities. Yet we also tend to think of robots as property, and as lacking the kind of rights that we reserve for people.

But if a machine can think, decide and act of its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?

What if a robot achieves true self-awareness? Should it have equal rights with us and the same protection under the law, or at least something similar?

These are some of the issues being discussed by the European Parliament’s Committee on Legal Affairs. Last year it released a draft report and motion calling for a set of civil law rules on robotics to regulate robots’ manufacture, use, autonomy and impact upon society.

Of the legal solutions proposed, perhaps most interesting was the suggestion of creating a legal status of “electronic persons” for the most sophisticated robots.

Approaching personhood

The report acknowledged that improvements in the autonomous and cognitive abilities of robots make them more than simple tools, and make ordinary rules on liability, such as contractual and tort liability, insufficient for handling them.

For example, the current EU directive on liability for harm by robots only covers foreseeable damage caused by manufacturing defects. In these cases, the manufacturer is responsible. However, when robots are able to learn and adapt to their environment in unpredictable ways, it’s harder for a manufacturer to foresee problems that could cause harm.

The report also raises the question of whether sufficiently sophisticated robots should be regarded as natural persons, legal persons (like corporations), animals or objects. Rather than lumping them into an existing category, it proposes that a new category of “electronic person” is more appropriate.

The report does not advocate immediate legislative action, though. Instead, it proposes that legislation be updated if and when robots develop greater complexity and behavioral sophistication. If this occurs, one recommendation is to reduce the liability of “creators” in proportion to the autonomy of the robot, with compulsory “no-fault” liability insurance covering the shortfall.

But why go so far as to create a new category of “electronic persons”? After all, computers still have a long way to go before they match human intelligence, if they ever do.

But few would dispute that robots – or, more precisely, the software that controls them – are becoming increasingly complex. Autonomous (or “emergent”) machines are becoming more common. There are ongoing discussions about the legal liability for autonomous vehicles, or whether we might be able to sue robotic surgeons.

These are not complicated problems as long as liability rests with the manufacturers. But what if manufacturers cannot be easily identified, such as if open source software is used by autonomous vehicles? Whom do you sue when there are millions of “creators” all over the world?

Artificial intelligence is also starting to live up to its moniker. Alan Turing, the father of modern computing, proposed a test in which a computer is considered “intelligent” if it fools humans into believing that the computer is human by its responses to questions. Already there are machines that are getting close to passing this test.


MIT’s artificial intelligence is able to synthesize sounds for video in a very believable way.

There are also other incredible successes, such as the computer that creates soundtracks to videos that are indistinguishable from natural sounds, the robot that can beat CAPTCHA, one that can create handwriting indistinguishable from human handwriting and the AI that recently beat some of the world’s best poker players.

Robots may eventually match human cognitive abilities and they are becoming increasingly human-like, including the ability to “feel” pain.


A robot being taught to ‘feel’ pain.

If this progress continues, it may not be long before self-aware robots are not just a product of fantastic speculation.

The EU report is among the first to formally consider these issues, but other countries are also engaging. Peking University’s Yueh-Hsuan Weng writes that Japan and South Korea anticipate human-robot coexistence by 2030. Japan’s Ministry of Economy, Trade and Industry has created a series of robot guidelines addressing business and safety issues for next-generation robots.

Electronic persons

If we did give robots some kind of legal status, what would it be? If they behaved like humans we could treat them like legal subjects rather than legal objects, or at least something in between. Legal subjects have rights and duties, and this gives them legal “personhood”. They do not have to be physical persons; a corporation is not a physical person but is recognized as a legal subject. Legal objects, on the other hand, do not have rights or duties although they may have economic value.

Assigning rights and duties to an inanimate object or software program independent of their creators may seem strange. However, with corporations we already see extensive rights and obligations given to fictitious legal entities.

Perhaps the approach to robots could be similar to that of corporations? The robot (or software program), if sufficiently sophisticated or if satisfying certain requirements, could be given similar rights to a corporation. This would allow it to earn money, pay taxes, own assets and sue or be sued independently of its creators. Its creators could, like directors of corporations, have rights or duties to the robot and to others with whom the robot interacts.

Robots would still have to be partly treated as legal objects since, unlike corporations, they may have physical bodies. The “electronic person” could thus be a combination of both a legal subject and a legal object.

The European Parliament will vote on the resolution this month. Regardless of the result, reconsidering robots and the law is inevitable and will require complex legal, computer science, and insurance research.

The Conversation

Kyle Bowyer, Lecturer, Curtin Law School, Curtin University

Photo Credit: Fiuxy.net

This article was originally published on The Conversation. Read the original article.