Popular shared stories on NewsBlur.
2027 stories
·
39310 followers

Germany's Bavaria to ban full-face veil

1 Comment
Minister says facial expressions are key to communication, but critics say a ban is largely symbolic.
acdha
1 hour ago
… and nothing about the men who commit the vast majority of offences. They may claim different faiths, but conservative men do seem unified by the desire for government regulation of women’s dress.
Washington, DC

Rule by Nobody

1 Comment

The compensation for a death sentence is knowledge of the exact hour when one is to die.
—Cincinnatus C., Invitation to a Beheading (Vladimir Nabokov, 1935)

Decision-making algorithms are everywhere, sorting us, judging us, and making critical decisions about us without our having much direct influence in the process. Political campaigns use them to decide where (and where not) to campaign. Social media platforms and search engines use them to figure out which posts and links to show us and in what order, and to target ads. Retailers use them to price items dynamically and recommend items they think you’ll be more likely to consume. News sites use them to sort content. The finance industry — from your credit score to the bots that high-frequency traders use to capitalize on news stories and tweets — is dominated by algorithms. Even dating is increasingly algorithmic, enacting a kind of de facto eugenics program for the cohort that relies on such services.

For all their ubiquity, these algorithms are paradoxical at their heart. They are designed to improve on human decision-making by supposedly removing its biases and limitations, but the inevitably reductive analytical protocols they implement are often just as vulnerable to misuse. Decision-making algorithms replace human judgment with simplified models of human thought that can reify, rather than mitigate, the biases of the programmers who conceptualize them.

Cathy O’Neil, in her recent book Weapons of Math Destruction, defines algorithms as “opinions formalized in code.” This deceptively simple appraisal radically undercuts the common view of algorithms as neutral and objective. And even if programmers were capable of correcting for their own biases, the machine-learning components of many algorithms make their workings mysterious, sometimes even to the programmers themselves, as Frank Pasquale describes in another recent book, The Black Box Society.

Algorithms can never have “enough”

In the complexity of their code and the size of the data troves they can process, these kinds of algorithms can seem unprecedented, constituting an entirely new kind of social threat. But the aims they are designed to meet are not new. The logic of how these algorithms have been applied follows from the longstanding ideals of bureaucracies generally: that is, they are presumed to concentrate power in well-ordered and consistent structures. In theory, anyway. In practice, bureaucracies tend toward inscrutable unaccountability, much as algorithms do. By framing algorithms as an extension of familiar bureaucratic principles, we can draw from the history of the critique of bureaucracy to help further unpack algorithms’ dangers. Like formalized bureaucracy, algorithms may make overtures toward transparency, but tend toward an opacity that reinforces extant social injustices.

In the early 20th century, sociologist Max Weber outlined the essence of pure bureaucracies. Like algorithms, bureaucratic processes are built on the assumption that individual human judgment is too limited, subjective, and unreliable, deficiencies that lead to nepotism, prejudice, and inefficiency. To combat that, an ideal bureaucracy, according to Weber, has a clear purpose, explicit written rules of conduct, and a merit-based hierarchy of career employees. This structure places power in the apparatus and allows bureaucracies to function consistently regardless of who occupies different roles, but this same impersonality makes them controllable by anyone who can seize their higher offices. Also, because the apparatus itself generates the power, bureaucrats have incentive to serve that apparatus and preserve it even when it veers from its original intended function. This creates a strong tendency within bureaucracies to entrench themselves regardless of who directs them.

The way algorithms are implemented can mimic these bureaucratic tendencies. Google’s search algorithm, for example, appears to have a clear, limited purpose — to return the most relevant search results and most lucrative ads — and operates within a growing but defined space. As the company’s engineers come and go, ascend through the company hierarchy or leave it entirely, the algorithm itself persists and evolves. The intent of the algorithm was once to organize the world’s information, but as it has become a commonplace way of finding information, information has been reshaped in the algorithm’s image, as is most obvious with search-engine optimization. This effectively entrenches the algorithm at the expense of the world’s diversity of information.

Both bureaucracies and algorithms are ostensibly committed to transparency but become progressively more obscure in the name of guarding their functionality. That is, the systematicity of both makes them susceptible to being “gamed”; Google and Facebook justify the secrecy of their sorting algorithms as necessary to thwart subversive actors. Weber notes that bureaucracies, too, tend to become increasingly complex over time while simultaneously becoming increasingly opaque. Each trend makes the other more intractable. “Once fully established, bureaucracy is among those social structures which are hardest to destroy,” Weber warns. In bureaucracies, over time, only those “in the know” can effectively navigate the encrusted processes to their own benefit. “The superiority of the professional insider every bureaucracy seeks further to increase through the means of keeping secret its knowledge and intentions,” he writes. “Bureaucratic administration always tends to exclude the public, to hide its knowledge and action from criticism as well as it can.” This makes bureaucracies appear impervious to outside criticism and amendment.

But as O’Neil argues about algorithms, “You don’t need to understand all the details of a system to know that it has failed.” The problem with both algorithms and bureaucracies is that they try to set themselves up to be failure-proof. Bad algorithms and bureaucracies have a built-in defense mechanism in their incomprehensible structure. Engineers are often the only people who can understand or even see the code; career bureaucrats are the only people who understand the inner workings of the system. Since no one else can identify the specific reasons for problems, any failure can be interpreted as a sign that the system needs to be given more power to produce better outcomes. And what constitutes a better outcome remains in the control of those implementing the algorithms, and is defined in terms of what the algorithm can process.

As Weber wrote, “The consequences of bureaucracy depend upon the direction which the powers using the apparatus give it. Very frequently a crypto-plutocratic distribution of power has been the result.” Likewise with algorithms: If a company’s algorithm increases its bottom line, for example, its social ramifications may become irrelevant externalities. If a recidivism model’s goal is to lower crime, the fairness or appropriateness of the prison sentences it produces doesn’t matter as long as the crime rate declines. If a social media platform’s goal is to maximize “engagement,” then it can be considered successful regardless of the veracity of the news stories or the intensity of the harassment that takes place there, so long as users continue clicking and commenting.

Though automated systems purport to avert discrimination, Pasquale writes, “software engineers construct the datasets mined by scoring systems; they define the parameters of data-mining analyses; they create the clusters, links, and decision trees applied; they generate the predictive models applied. Human biases and values are embedded into each and every step of development. Computerization may simply drive discrimination upstream.” O’Neil offers a similar argument: “Models are constructed not just from data but from choices we make about which data to pay attention to — and which to leave out. Those choices are not just about logistics, profits, and efficiency. They are fundamentally moral. If we back away from them and treat mathematical models as a neutral and inevitable force, like the weather or the tides, we abdicate our responsibility.”

For bad algorithms and bureaucracies, any failure can be interpreted as a sign that the system needs more power to produce better outcomes

Far from an unintended consequence, however, that abdication becomes the whole point, even if algorithms and bureaucracies are frequently born with benevolent aims in mind. For the proprietors of these algorithms, this abdication is translated into a fervor for objective purity, as if neutrality in and of itself were always an indisputable aim. The intent of algorithms is presented as self-evident (be neutral and thus fair) rather than a matter of negotiation and implementation. The means and ends become disconnected; objectivity becomes a front, a way of certifying outcomes regardless of whether they constitute social improvements. Thus the focus on combating human bias leads directly to means for cloaking and dissipating human responsibility, merely making that bias harder to detect. Efforts to be more fair end up as a temptation or justification for opacity, greasing the tracks for an uneven allocation of rewards and penalties and exacerbating existing inequalities at every turn.

In On Violence, Hannah Arendt characterizes bureaucracy as “the rule of an intricate system of bureaus in which no men, neither one nor the best, neither the few nor the many, can be held responsible, and which could be properly called rule by Nobody.” Left unchecked, bureaucracy enables an unwitting conspiracy to carry out deeds that no individual would endorse but in which all are ultimately complicit. Corporations can pursue profit without consideration for effects on the environment or human lives. Violence becomes easier at the state level. And anti-state violence, without specific targets to aim for, shifts from strategic, logical action to incomprehensible, more terroristic expressions of rage. “The greater the bureaucratization of public life, the greater will be the attraction of violence,” Arendt argues. “In a fully developed bureaucracy there is nobody left with whom one could argue, to whom one could present grievances, on whom the pressures of power could be exerted.” It would, of course, be difficult to “attack” an algorithm, to make it feel shame or guilt, to persuade it that it is wrong.


In a capitalist society, the desire to remove human biases from decision-making processes is part of the overarching pursuit of efficiency and optimization, the rationalization Weber described as an “iron cage.” Algorithms may be sold as reducing bias, but their chief aim is to afford profit, power, and control. Fairness is the alibi for the way algorithmic systems reduce human subjects to only the attributes expressible as data, which makes us easier to monitor, manipulate, sell to, and exploit. They transfer risk from their operators to those caught up within their gears. So even when algorithms are working well, they are not working at all for us.

It’s obvious that algorithms with inaccurate data can be harmful to someone trying to get a job, a loan, or an apartment, and Pasquale and O’Neil trace out the many ramifications of this. Even if you can figure out when data brokers have inaccurate data about you, it is very difficult to get them to change it, and by the time they do, the bad data may have been passed along to countless different brokers, cascading exponentially through an interlocking system of algorithmic governance. Many algorithmic systems also use questionable proxies in place of traits that are impossible to quantify or illegal to track or sort by. Some, for instance, use ZIP codes as a proxy for race.

As with bureaucracies, algorithms purport to gain fairness by measuring only what can be measured fairly, leaving out anything prone to judgment calls, but in actuality this leaves a lot of leeway for those who have inside information or connections that can help them navigate the byzantine processes, and massage their data.

More precise and accurate data can’t fix a bad system. Even when the data is accurate, the system may lack the context that situates that data’s systemic implications. Pasquale summarizes how this occurs in lending: “Subtle but persistent racism, arising out of implicit bias or other factors, may have influenced past terms of credit, and it’s much harder to keep up on a loan at 15 percent interest than one at five percent. Late payments will be more likely, and then will be fed into present credit scoring models as neutral, objective, non-racial indicia of reliability and creditworthiness.”
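
To make the arithmetic in that quote concrete, here is a minimal sketch using the standard fixed-payment amortization formula; the $10,000 principal and five-year term are hypothetical, chosen only for illustration.

    def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
        """Fixed monthly payment on a fully amortized loan."""
        r = annual_rate / 12          # monthly interest rate
        n = years * 12                # number of monthly payments
        return principal * r / (1 - (1 + r) ** -n)

    for rate in (0.05, 0.15):
        pay = monthly_payment(10_000, rate, 5)
        print(f"{rate:.0%}: ${pay:,.2f}/month, ${pay * 60 - 10_000:,.2f} in total interest")
    # 5%:  $188.71/month, about $1,323 in total interest
    # 15%: $237.90/month, about $4,274 in total interest

The same borrower owes roughly $50 more every month at 15 percent, and more than three times the interest overall, before any late fees compound the difference.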

Often these systems create feedback loops that worsen what they purport to measure objectively. Consider a credit rating that factors in your ZIP code. If your neighbors are bad about paying their bills, your score will go down. Your interest rates go up, making it harder to pay back loans and increasing the likelihood that you miss a payment or default. That lowers your score further, along with those of your neighbors. And so on. The algorithm is prescriptive, though the banks issuing loans view it as merely predictive.
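
That loop is easy to state in code. The sketch below is a toy model, not any credit bureau’s actual formula: every number is invented, and the one assumption doing the work is that each score is blended with the neighborhood average, as the paragraph above describes.

    # Hypothetical credit scores for four borrowers sharing one ZIP code.
    scores = [650, 640, 700, 580]

    def update(scores, delinquent):
        """One scoring cycle: a direct penalty plus a pull toward the ZIP-code average."""
        zip_avg = sum(scores) / len(scores)
        new_scores = []
        for i, s in enumerate(scores):
            if i == delinquent:
                s -= 40                                         # missed payment: direct penalty
            new_scores.append(round(0.9 * s + 0.1 * zip_avg))   # neighborhood blend
        return new_scores

    # Borrower 3's rising rates make further misses more likely; here we
    # simply assume one miss per cycle and watch the whole ZIP code sink.
    for cycle in range(3):
        scores = update(scores, delinquent=3)
        print(cycle, scores)

After three cycles every score in the ZIP code has fallen, not just the delinquent borrower’s: the penalty lowers the average, and the average lowers everyone.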

No matter how much good data you have, there is always additional context, in the form of additional data, that could refine the result. There is no threshold that confers objectivity, no point at which results stop being subject to interpretation. Algorithms can never have “enough.”

The need to optimize yourself for a network of opaque algorithms induces a sort of existential torture. In The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy, anthropologist David Graeber suggests a fundamental law of power dynamics: “Those on the bottom of the heap have to spend a great deal of imaginative energy trying to understand the social dynamics that surround them — including having to imagine the perspectives of those on top — while the latter can wander about largely oblivious to much of what is going on around them. That is, the powerless not only end up doing most of the actual, physical labor required to keep society running, they also do most of the interpretive labor as well.” This dynamic, Graeber argues, is built into all bureaucratic structures. He describes bureaucracies as “ways of organizing stupidity” — that is, of managing and reproducing these “extremely unequal structures of imagination” in which the powerful can disregard the perspectives of those beneath them in various social and economic hierarchies. Employees need to anticipate the needs of bosses; bosses need not reciprocate. People of color are forced to learn to accommodate and anticipate the ignorance and hostility of white people. Women need to be acutely aware of men’s intentions and feelings. And so on. Even benevolent-seeming bureaucracies, in Graeber’s view, have the effect of reinforcing “the highly schematized, minimal, blinkered perspectives typical of the powerful” and their privileges of ignorance and indifference toward those positioned as below them.

Fairness is the alibi for reducing human subjects to attributes only expressible as data, which makes us easier to exploit. Algorithms transfer risk from their operators to those caught up within their gears

This helps explain why bureaucrats and software engineers have little incentive to understand the people governed by their systems, while the governed must expend precious intellectual capital trying to reverse-engineer these systems to survive within them. It’s a losing battle, of course: Navigating the world effectively may require more and more awareness and interpretation of algorithmic systems, but in many cases the more we know, the more likely our knowledge is to become obsolete. The institutions that run these systems tend to treat our reverse-engineering them as inappropriately learning how to game them, and they can change them unilaterally. As Goodhart’s law states, when a measure becomes a target, it ceases to be a useful measure. The moment that more than a few people understand how an algorithm works, its engineers will modify it, lest it lose its power.

So we must simultaneously understand how these systems work in a general sense and behave the way they want us to, but also stop short of any behavior that could be seen as gaming them. We know our actions are recorded, but not necessarily by whom. We know we are judged, but not how. Our lives and opportunities are altered accordingly but invisibly. We are forced to figure out not only how to adapt to the best of our abilities but what it is that even happened to us.

Unfortunately, there’s not much an individual can do. It’s undeniable that individuals have been harmed by algorithms, yet it is nearly impossible for any of those victims to prove it on an individual basis and demonstrate legal standing. O’Neil and Pasquale both note that the problems with algorithms are too extensive for any silver-bullet solution, offering instead a laundry list of approaches drawing from precedents in U.S. policy (e.g., the Fair Credit Reporting Act and the Health Insurance Portability and Accountability Act) and European legal codes. But regulatory means of reining in algorithms — even assuming the significant hurdles of regulatory capture (the government’s understanding of these instruments is informed mostly by their beneficiaries) could be surmounted — would still require labyrinthine bureaucracies to implement them. If the problem with algorithms lies in how they mimic the ways bureaucracies function, trying to fix them with different bureaucracies merely reproduces the problem.

Algorithms are probably not going anywhere. Technology and bureaucracy both tend toward expansion as they mature. But while getting rid of algorithms seems unlikely, they can be modified toward greater social utility. This would require evaluating them not in terms of how objective they seem, but on ethical, unapologetically subjective grounds. O’Neil argues that algorithms should be judged by the ethical orientation their programmers and users give to them. “Mathematical models can sift through data to locate people who are likely to face great challenges, whether from crime, poverty, or education,” she writes. “It’s up to society whether to use that intelligence to reject and punish them — or to reach out and help them with resources they need.” O’Neil writes of even more promising applications, like an algorithm that scans troves of data for signs of forced labor in international supply chains and another that identifies children at greatest risk for abuse. Crucially, they rely on humans at both ends of the process to make key decisions.

In this paradigm, the problem with “customized” rankings is not their lack of universality but the fact that they could be even more customized to suit specific users’ goals. If a platform wishes to be truly neutral, its algorithms must be amenable to the unique objectives of each user. Pasquale suggests that when Google or Yelp or Siri makes a restaurant recommendation, a user could decide whether and how heavily to take into account not just the type of food and the distance to get there, but whether the company provides its workers with health benefits or maternity leave.
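
As a minimal sketch of what that amenability could look like (the factor names, scores, and weights below are hypothetical, not any platform’s actual API): the ranking function stays fixed while each user supplies their own weights.

    # Toy restaurant data, scored 0-1 on each factor.
    restaurants = [
        {"name": "A", "food_match": 0.9, "proximity": 0.4, "worker_benefits": 0.2},
        {"name": "B", "food_match": 0.6, "proximity": 0.8, "worker_benefits": 0.9},
    ]

    def rank(items, weights):
        """Order items by a weighted sum of whichever factors the user cares about."""
        return sorted(items, key=lambda r: sum(w * r[k] for k, w in weights.items()),
                      reverse=True)

    # One user weighs only food and distance; another also weighs labor practices.
    print([r["name"] for r in rank(restaurants, {"food_match": 1.0, "proximity": 0.5})])
    # -> ['A', 'B']
    print([r["name"] for r in rank(restaurants,
                                   {"food_match": 1.0, "proximity": 0.5, "worker_benefits": 2.0})])
    # -> ['B', 'A']

The same data yields opposite rankings; the only thing that changed is whose priorities the weighted sum encodes.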

Opaque algorithms that rely on Big Data create issues that are commonly brushed aside as collateral damage when they are recognized at all. But those issues are avoidable. By acknowledging and accepting the human bias endemic to these systems, those same forces could be repurposed for good. We need not be trapped in their iron cages.

tante
4 hours ago
Algorithms as bureaucracy
Oldenburg/Germany

seriouslyamerica: I hate this idea that I’m supposed to meet my...

1 Comment and 9 Shares


seriouslyamerica:

I hate this idea that I’m supposed to meet my opponents in the middle. The middle of what?

Denying my trans friends their humanity?

Debating whether my mothers should have had the right to marry?

Whether my terminally ill neighbor should access healthcare?

Whether my black friends should have to fear for their lives just for existing in their own neighborhoods?

I’m supposed to compromise with people who would send an immigrant child back to certain death in their country of origin?

That’s not respect, or compromise. It’s fascism. It’s violence. And it’s abhorrent.

When you ask fascists to meet you in the middle, you compromise your morals for nothing.

popular
5 hours ago
1 public comment
moonlit
5 hours ago
Even if they actually would, we can't meet in the middle when it's our existence being threatened.

Pluto scientists are mad as hell and they’re not going to take it anymore

1 Comment and 4 Shares

NASA/JHUAPL/SwRI

It's no secret that Alan Stern and other scientists who led the New Horizons mission were extremely displeased by Pluto's demotion from planet status in 2006 during a general assembly of the International Astronomical Union. They felt the IAU decision undermined the scientific and public value of their dramatic flyby mission to the former ninth planet of the Solar System.

But now the positively peeved Pluto people have a plan. Stern and several colleagues have proposed a new definition for planethood, which they intend to submit for consideration at the next general assembly of the IAU. The final arbiters of astronomical definitions will next gather in Vienna in August 2018.

In technical terms, the proposal redefines planethood by saying, "A planet is a sub-stellar mass body that has never undergone nuclear fusion and that has sufficient self-gravitation to assume a spheroidal shape adequately described by a triaxial ellipsoid regardless of its orbital parameters." More simply, the definition can be stated as "round objects in space that are smaller than stars."

Here's the thing about the new definition—a lot of bodies in the Solar System meet the criteria. Pluto does, of course, but so do many moons, including our own around Earth. There are also dozens of objects discovered in the Kuiper Belt, beyond Pluto's orbit, that meet the definition. In fact, the tally of "planets" under the new definition is now 110 and rising. (Also, Obi-Wan Kenobi would be proven correct. The Death Star would indeed be no moon but rather a planet, too.)

And what of the poor students who have struggled to memorize the eight planets of the Solar System, with sayings such as "My Very Educated Mother Just Served Us Nachos"? Stern and his colleagues counter: "Certainly 110 planets is more than students should be expected to memorize, and indeed they ought not." They also raise a good point: students don't learn science by memorizing things but rather by understanding how things work.

"Understanding the natural organization of the Solar System is much more informative than rote memorization," the proposal states. "Teaching the zones of the Solar System from the Sun outward and the types of planets and small bodies in each is perhaps the best approach."

fxer
5 hours ago
"Counting moons as planets seems strange to me and I imagine would irritate the average layman."
Bend, Oregon
satadru
16 hours ago
New York, NY

The Difference Between Blackstrap and True Molasses

1 Share

Got a jar of blackstrap in the pantry? For the love of all that is sweet and delicious, please don't use it as a substitute for true molasses.
fxer
6 hours ago
Bend, Oregon

Supreme Court To Decide If Mexican Nationals May Sue For Border Shooting

1 Share

Relatives of Sergio Hernández sit in Ciudad Juarez at the U.S.-Mexico border, on the second anniversary of his killing in 2012. (Jesus Alcazar/AFP/Getty Images)

The cellphone video is vivid. A border patrol agent aims his gun at an unarmed 15-year-old some 60 feet away, across the border with Mexico, and shoots him dead.

On Tuesday, the U.S. Supreme Court hears arguments in a case testing whether the family of the dead boy can sue the agent for damages in the U.S.

Between 2005 and 2013, there were 42 such cross-border shootings, a dramatic increase over earlier times.

The shooting took place on the border between El Paso, Texas, and Juárez, Mexico.

The area is about 180 feet across. Eighty feet one way leads to a steep incline and an 18-foot fence on the U.S. side — part of the so-called border wall that has already been built. An almost equal distance the other way is another steep incline leading to a wall topped by a guardrail on the Mexican side.

In between is the dry bed of the Rio Grande, with an invisible line in the middle that separates the U.S. and Mexico. Overhead is a railroad bridge, supported by huge columns, connecting the two countries.

In June 2010, Sergio Hernández and his friends were playing chicken, daring each other to run up the incline on the U.S. side and touch the fence, according to briefs filed by lawyers for the Hernández family.

At some point U.S. border agent Jesus Mesa, patrolling the culvert, arrived on a bicycle, grabbed one of the kids at the fence on the U.S. side, and the others scampered away. Fifteen-year-old Sergio ran past Mesa and hid behind a pillar beneath the bridge on the Mexican side.

As the boy peeked out, Agent Mesa, 60 feet or so away on the U.S. side, drew his gun, aimed it at the boy, and fired three times, the last shot hitting the boy in the head.

Although agents quickly swarmed the scene, they are forbidden to cross the border. They did not offer medical aid, and soon left on their bikes, according to lawyers for the family.

A day after the shooting, the FBI's El Paso office issued a press release asserting that Agent Mesa fired his gun after being "surrounded" by suspected illegal aliens who "continued to throw rocks at him."

Two days later, cell phone videos surfaced contradicting that account. In one video the boy's small figure can be seen edging out from behind the column; Mesa fires, and the boy falls to the ground.

"The statement literally says he was surrounded by these boys, which is just objectively false," says Bob Hilliard, who represents the family. Pointing to the cell phone video, he says it is "clear that nobody was near " agent Mesa.

In one video, a woman's voice is heard saying that some of the boys had been throwing rocks, but the video does not show that, and by the time the shooting takes place, nobody is surrounding Agent Mesa.

The U.S. Department of Justice decided not to prosecute Mesa. Among other things, the department concluded that it did not have jurisdiction because the boy was not on U.S. soil when he was killed.

Mexico charged the agent with murder, but when the U.S. refused to extradite him, no prosecution could go forward.

U.S. Customs and Border Protection did not discipline Agent Mesa — a fact that critics, including high-ranking former agency officials, say reflects a pattern inside the agency.

The parents of the slain boy, however, have sued Mesa for damages, contending that the killing violated the U.S. Constitution by depriving Sergio Hernández of his life.

A border in the Rio Grande culvert divides the Mexican city of Juárez (bottom) and the U.S. city of El Paso, Texas, shown here in 2010. (Alexandre Meneghini/AP)

"I can't believe that this is allowed to happen - that a border patrol agent is allowed to kill someone on the Mexican side, and nothing happens," Sergio's mother, Maria Guadalupe Güereca Betancour, says through an interpreter.

As the case comes to the Supreme Court, there has been no trial yet and no court finding of facts. Mesa continues to maintain that he shot the boy in self-defense after being surrounded by rock-throwing kids.

That's a scenario that Mesa's lawyers say is borne out by other videos from stationary cameras that have not been released to the public.

"It was clear that Agent Mesa was in an area that is wrought with narcotics trafficking and human trafficking," asserts Randolph Ortega, who represents Mesa on behalf of the border patrol agents union. "And it's clear that, in my opinion, he was defending himself."

The only question before the Supreme Court centers on whether the Hernández family has the right to sue. A divided panel of the Fifth Circuit Court of Appeals concluded that no reasonable officer would have done what Agent Mesa did, and that therefore the family could sue.

However, the full court of appeals reversed that judgment, ruling that because the Hernández boy was standing on the Mexico side of the border and was a Mexican citizen with no ties to the United States, his family could not sue for a violation of the U.S. Constitution. Moreover, the appeals court said that even if the facts as alleged by the Hernández family are true, Mesa is entitled to qualified immunity, meaning he cannot be sued because there is no clearly established body of law barring his conduct.

Lawyers for the Hernández family counter that Supreme Court precedents establish a practical approach in determining whether there is a right to sue for the use of excessive force in circumstances like these. Lawyer Hilliard says yes, the boy was across the border when the shots were fired, but by just 60 feet.

"This is a domestic action by a domestic police officer standing in El Paso, Texas, who is to be constrained by this country's constitution," Hilliard contends. "There's a U.S. Supreme Court case that says a law enforcement officer cannot seize an individual by shooting him dead, which is what happened in this case."

Hilliard argues that if you follow the border patrol's argument to its necessary conclusion, "it means that a law enforcement officer is immune to the Constitution when exercising deadly force across the border.

"He could stand on the border and target practice with the kids inside the culvert," Hilliard warns.

But lawyer Ortega replies that's not true, and asks how the court should draw the line.

"How far does it extend? Does it extend 40 feet? As far as the bullet can travel? All of Juárez, Mexico? All of (the state of) Chihuahua, Mexico? Where does the line end?"

Backed by the federal government, he suggests that a ruling in favor of the Hernández family would mean foreigners could sue over a drone attack.

Now it's up to the Supreme Court to decide where to draw the line.

fxer
6 hours ago
Bend, Oregon

Israeli Soldier Who Killed A Wounded Palestinian Is Sentenced To 18 Months

1 Share

The father, center, and mother of Palestinian Abdul Fatah al-Sharif watch the sentencing hearing of Israeli soldier Elor Azaria, who killed their son in March of 2016. (Hazem Bader/AFP/Getty Images)

More than a month after a military court found Sgt. Elor Azaria guilty of manslaughter, the soldier has been ordered to serve an 18-month prison sentence. Azaria, 21, who worked as an army medic, shot and killed Abdel Fattah al-Sharif, a Palestinian assailant who was already incapacitated.

The soldier's defense team has said it plans to appeal any sentence that includes jail time. Since last month's verdict, many on Israel's right wing have called for Azaria to be pardoned — something that Prime Minister Benjamin Netanyahu has said he supports. A potential pardon would have to come from Israel's president.

Video of the shooting, which took place in the occupied West Bank last March, sparked strong and disparate reactions in Israel and beyond, fueling debate over the proper use of force and rules of engagement.

Here's how NPR's Joanna Kakissis described the videotaped events in her report from Jerusalem last month:

"Al-Sharif had been shot and wounded after stabbing an Israeli soldier. Eleven minutes later, Azaria shot the motionless Al-Sharif in the head.

"A human rights activist filmed the killing. The video went viral.

"Many Israelis say Azaria was justified because he feared Al-Sharif might have been wearing an explosive belt. But Azaria's superior officers say his actions contradict the army's ethical standards."

The crime of manslaughter could have exposed Azaria to a 20-year prison term; prosecutors had sought a sentence of 3-5 years. In addition to the prison sentence, the military court demoted Azaria to the rank of private.

fxer
6 hours ago
Bend, Oregon