Georgia Passes Anti-Infosec Legislation (Electronic Frontier Foundation)

Despite the full-throated objections of the cybersecurity community, the Georgia legislature has passed a bill that would open independent researchers who identify vulnerabilities in computer systems to prosecution and up to a year in jail.

EFF calls upon Georgia Gov. Nathan Deal to veto S.B. 315 as soon as it lands on his desk.

For months, advocates such as Electronic Frontiers Georgia have descended on the state Capitol to oppose S.B. 315, which would create a new crime of “unauthorized access” to computer systems. While lawmakers did make a major concession by exempting terms of service violations under the measure—an exception we’ve been asking Congress for years to carve out of the federal Computer Fraud & Abuse Act (CFAA)—the bill still falls short of ensuring that researchers aren’t targeted by overzealous prosecutors, as has too often been the case under the CFAA.

“Basically, if you’re looking for vulnerabilities in a non-destructive way, even if you’re ethically reporting them—especially if you’re ethically reporting them—suddenly you’re a criminal if this bill passes into law,” EF Georgia’s Scott Jones told us in February.

Andy Green, a lecturer in information security and assurance at Kennesaw State University, concurred.

“I’m putting research on hold with college undergrad students because it may open them up to criminal penalties,” Green told The Parallax. “It’s definitely giving me pause right now.”

Until this week, Georgia had positioned itself as a hub for cybersecurity research, with well-regarded university departments developing future experts and a $35 million state investment to expand its cybersecurity training complex. That is one reason it’s so unfortunate that lawmakers would pass a bill that would deliberately chill workers in the field. Cybersecurity firms—and other tech companies—considering relocation to Georgia will likely think twice about moving to a state that is so hostile and short-sighted when it comes to security research.

S.B. 315 is a dangerous bill with ramifications far beyond what the legislature imagined, including discouraging researchers from coming forward with vulnerabilities they discover in critical systems. It’s time for Governor Deal to step in and listen to the cybersecurity experts who keep our data safe, rather than lawmakers looking to score political points.


Beyond Implementation: Policy Considerations for Secure Messengers (Electronic Frontier Foundation)

One of EFF’s strengths is that we bring together technologists, lawyers, activists, and policy wonks. And we’ve been around long enough to know that while good technology is necessary for success, it is rarely sufficient. Good policy and people who will adhere to it are also crucial. People write and maintain code, people run the servers that messaging platforms depend on, and people interface with governments and respond to pressure from them.

We could never get on board with a tool—even one that made solid technical choices—unless it were developed and had its infrastructure maintained by a trustworthy group with a history of responsible stewardship of the tool. Trusting the underlying technology isn’t enough; we have to be able to trust the people and organizations behind it. Even open source tools that function in a distributed manner, rather than using a central server, have to be backed up by trustworthy developers who address technical problems in a timely manner.

Here are a few of the factors beyond technical implementation that we consider for any messenger:

  • Developers should have a solid history of responding to technical problems with the platform. This one is critical. Developers must not only patch known issues in a timely manner but also respond especially quickly to issues affecting particularly sensitive users. For instance, it was reported that in 2016, Telegram failed to protect its Iranian users in a timely manner in response to state-sponsored attacks. That history gives us more than a little pause.
  • Developers should have a solid history of responding to legal threats to their platform. This is also critical. Developers must not only protect their users from technical threats, but from legal threats as well. Two positive examples come readily to mind: Apple and Open Whisper Systems, the developers of iMessage and Signal respectively. Apple famously stood up for the security of their users in 2016 in response to an FBI call for a backdoor in their iPhone device encryption, and Open Whisper Systems successfully fought back against a grand jury subpoena gag order.
  • Developers should have a realistic and transparent attitude toward government and law enforcement. This is part of the criteria by which we evaluate companies in our annual Who Has Your Back? report. We’re strongly of the opinion that developers can’t just stick their heads in the sand and hope that the cops never show up. They have to have a plan, law enforcement guidelines, and a transparency report. Any tool lacking those is asking for trouble.

We discuss these concerns here to highlight the undeniable fact that developing and maintaining secure tools is a team sport. It’s not enough that an encrypted messaging app use reliable and trusted encryption primitives. It’s not enough that the tool implement those primitives well, wrap them in a good UX, and keep the product maintained. Beyond all that, the team responsible for the app must be versed in law and technology policy, be available and responsive to their users’ real-world threats, and make a real effort to address the security trade-offs their products present.


This post is part of a series on secure messaging.
Find the full series here.


The Apache News Round-up: week ending 30 March 2018 (Apache Software Foundation Blogs)

Let's bid March farewell with a look back at the many Apache activities over the past week:

But first: cake and party favors!
 - The Apache® Software Foundation Celebrates 19 Years of Open Source Leadership "The Apache Way" https://s.apache.org/gK4Q
 - Read "Open – For Business – At the ASF" by Merv Adrian, VP Research at Gartner https://blogs.gartner.com/merv-adrian/2018/03/27/open-for-business-at-the-asf/
 - A look at the "Apache at 19" promo at https://youtu.be/Fqk_rlKiVIs

ASF Board –management and oversight of the business affairs of the corporation in accordance with the Foundation's bylaws.
 - Next Board Meeting: 18 April. Board calendar and minutes http://apache.org/foundation/board/calendar.html

ApacheCon™ –the ASF's official global conference series.
 - ENDS TODAY: CFP for ApacheCon 24-29 September in Montreal http://apachecon.com/
 - Travel Assistance applications now being accepted for ApacheCon/Montreal https://www.apache.org/travel/

ASF Infrastructure –our distributed team on three continents keeps the ASF's infrastructure running around the clock.
 - 7M+ weekly checks yield kicking performance at 99.98% uptime. http://status.apache.org/

ASF Operations Factoid –this week, 519 Apache contributors changed 897,504 lines of code over 3,230 commits. Top 5 contributors, in order, are: Hanisha Koneru, Carlos Sanchez Gonzalez, Jean-Baptiste Onofré, Till Rohrmann, and Tellier Benoit.

Apache Accumulo™ –a sorted, distributed key/value store that provides robust, scalable data storage and retrieval. 
 - Apache Accumulo 1.7.4 released https://accumulo.apache.org/

Apache Ant™ –a Java library and command-line tool that helps build software.
 - Apache Ant 1.9.11 and 1.10.3 http://ant.apache.org/

Apache Any23™ –Anything To Triples is a library, a web service and a command line tool that extracts structured data in RDF format from a variety of Web documents.
 - Apache Any23 2.2 released http://any23.apache.org/

Apache Commons™ Text –an Open Source software library that provides a host of algorithms focused on working with strings and blocks of text.
 - Apache Commons Text 1.3 released http://commons.apache.org/

Apache Groovy™ –a multi-faceted programming language for the JVM.
 - Apache Groovy 2.4.15 released https://groovy.apache.org/

Apache HTTP Server™ –the world's most popular Web server software.
 - Apache HTTP Server 2.4.33 released http://httpd.apache.org/

Apache Jackrabbit™ Oak –a scalable, high-performance hierarchical content repository designed for use as the foundation of modern world-class Web sites and other demanding content applications.
 - Apache Jackrabbit Oak 1.0.42 released http://jackrabbit.apache.org/

Apache Kafka™ –a distributed, fault-tolerant publish-subscribe messaging system.
 - Apache Kafka 1.1.0 released http://kafka.apache.org/

Apache Kudu™ –an Open Source storage engine for structured data that supports low-latency random access together with efficient analytical access patterns.
 - Apache Kudu 1.7.0 released https://kudu.apache.org/

Apache Kylin™ –an Open Source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Apache Hadoop, supporting extremely large datasets.
 - Apache Kylin 2.3.1 released https://kylin.apache.org/

Apache PDFBox™ –an Open Source Java tool for working with PDF documents.
 - Apache PDFBox 2.0.9 released http://pdfbox.apache.org/

Apache Qpid™ JMS –an AMQP enterprise messaging implementation.
 - Apache Qpid JMS 0.31.0 released http://qpid.apache.org/

Apache Struts™ –a free Open Source framework for creating Java Web applications.
 - Immediately upgrade commons-fileupload to version 1.3.3 (see the build-file sketch after this list) http://mail-archives.apache.org/mod_mbox/www-announce/201803.mbox/%3CCAMopvkNu%2BMdh%3DXCDQJmKYfjd%3DbdCFkhNXvWbYzvmXuLNw0aYbg%40mail.gmail.com%3E
 - A crafted XML request can be used to perform a DoS attack when using the Struts REST plugin http://mail-archives.apache.org/mod_mbox/www-announce/201803.mbox/%3CCAMopvkNZoHH3qx%2B9brdRdAoZ7zy9w6QPotjohVwqsopGEk%3Dsgw%40mail.gmail.com%3E
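
For projects that pick up commons-fileupload transitively (as Struts applications typically do), pinning the patched release is usually a one-line build change. Here is a minimal sketch using the Gradle Kotlin DSL; it assumes a Gradle-based build and is illustrative only, not part of the Struts announcement itself:

    // build.gradle.kts: force the patched commons-fileupload everywhere,
    // including where it arrives as a transitive dependency of struts2-core.
    configurations.all {
        resolutionStrategy {
            // 1.3.3 fixes the known deserialization vulnerability in DiskFileItem.
            force("commons-fileupload:commons-fileupload:1.3.3")
        }
    }

Maven users can get the same effect by declaring commons-fileupload 1.3.3 as a direct dependency, so that it takes precedence over the older transitive version.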

Did You Know?

 - Did you know that HBaseCon and PhoenixCon will be taking place 18 June in San Jose? Contact the Apache HBase and Phoenix project communities for more information http://hbase.apache.org/ and http://phoenix.apache.org/

 - Did you know that Orange Moldova uses Apache Wicket to build its Web apps? http://wicket.apache.org/

 - Did you know that new projects in the Apache Incubator include Druid (Big Data), Dubbo (Java RPC framework), ECharts (charts and data visualization tool), among others? http://incubator.apache.org/

Apache Community Notices:

 - The Apache Software Foundation 2018 Vision Statement https://s.apache.org/zqC3

 - Apache in 2017 - By The Digits https://s.apache.org/h8do

 - Foundation Statement –Apache Is Open. https://s.apache.org/PIRA

 - "Success at Apache" focuses on the processes behind why the ASF "just works". https://blogs.apache.org/foundation/category/SuccessAtApache

 - Please follow/like/re-tweet the ASF on social media: @TheASF on Twitter and on LinkedIn at https://www.linkedin.com/company/the-apache-software-foundation

 - Do friend and follow us on the Apache Community Facebook page https://www.facebook.com/ApacheSoftwareFoundation/ and Twitter account https://twitter.com/ApacheCommunity

 - The list of Apache project-related MeetUps can be found at http://apache.org/events/meetups.html

 - Members of the Apache community will be presenting at DataWorks Summit 16-19 April 2018 in Berlin https://dataworkssummit.com/

 - Meet members of the Apache community at Open Expo Europe 6-7 June 2018 in Madrid https://openexpoeurope.com/

 - We're teaming up with Berlin Buzzwords - 10-12 June 2018 (Apache Lounge dates: 11-12 June) https://berlinbuzzwords.de/

 - The 2018 Apache EU Roadshow will be held during FOSS Backstage in Berlin 13-14 June 2018 https://foss-backstage.de/

 - Apache Big Data project communities will be participating at DataWorks Summit 17-21 June 2018 in San Jose https://dataworkssummit.com/

 - ApacheCon North America will be held 24-29 September in Montreal http://apachecon.com/ **CFP IS OPEN!**

 - ASF Quarterly Report: Operations Summary: November 2017 - January 2018 https://s.apache.org/UtBD

 - ASF Annual Report is available at https://s.apache.org/FY2017AnnualReport

 - Find out how you can participate with Apache community/projects/activities --opportunities open with Apache HTTP Server, Avro, ComDev (community development), Directory, Incubator, OODT, POI, Polygene, Syncope, Tika, Trafodion, and more! https://helpwanted.apache.org/

 - Are your software solutions Powered by Apache? Download & use our "Powered By" logos http://www.apache.org/foundation/press/kit/#poweredby

= = =

For real-time updates, sign up for Apache-related news by sending mail to announce-subscribe@apache.org and follow @TheASF on Twitter. For a broader spectrum from the Apache community, https://twitter.com/PlanetApache provides an aggregate of Project activities as well as the personal blogs and tweets of select ASF Committers.

# # #

Some Easy Things We Could Do to Make All Autonomous Cars Safer (Electronic Frontier Foundation)

Incident response standards, data sharing, and not blaming humans unfairly for the failures of machines

More than a week after an Uber vehicle driving in autonomous mode killed a pedestrian in Tempe, Arizona — the first pedestrian death by a self-driving car — we still don’t know what exactly went wrong. Video of the crash shows that the pedestrian, Elaine Herzberg, walked in front of a moving vehicle. But the vehicle didn’t appear to react, and there are many unanswered questions as to why it did not. Did the car’s Velodyne Light Detection and Ranging (LIDAR) or other sensors get enough signal to detect her? Did Uber’s decision to scale down to a single LIDAR sensor from the seven LIDAR sensors on earlier vehicle models, which created more LIDAR blindspots, play a role? Were the vehicle’s LIDAR sensors disabled? Did the fact that she was a pedestrian walking a bicycle confuse any of the car’s vision systems? Did the vehicle in fact slow down?

Regardless of the details, the most important question we should all be asking is: What can Uber and its competitors do to learn collectively from this incident and (hopefully) avoid similar incidents in the future? 

One thing all self-driving car companies could and should do is develop incident-response protocols, and those protocols should include sharing data about collisions and other safety incidents. That data needs to be shared between autonomous car makers, government regulators, academic research labs, and ideally the public,[1] so they can analyze what went wrong, learn from each other’s mistakes, and all get safer faster. This seems fairly obvious, but self-driving car companies are racing to develop the first fully autonomous, “Level 5” vehicle. Acting in isolation, they have few if any incentives to share data. But if sharing is the rule, their vehicles will be collectively safer, and the public will be much better off. 

While autonomous vehicles are hailed for their promise of reducing vehicle fatalities, the Uber accident has raised questions about whether and when autonomous vehicles will really be safer than human drivers. If accidents continue at this initial rate, some of the early self-driving car fleets might be much more dangerous than regular vehicles. That isn’t a reason to stop. We are very early in the technology’s development; early airplanes were disastrously dangerous, and dramatic safety gains have continued to the present day. But it is a reason to ask, how can we ensure that safety improvements happen as fast as possible?

This is especially true given that, unlike the pilots who died flying early airplanes, pedestrians injured or killed by autonomous vehicles are not the ones who decided to get into them and accept the risk. It’s the companies that are deciding what risks to impose on the rest of us. We have the right to understand that risk, what companies are doing to mitigate it, and whether they’ve put us at any unnecessary risk, which it appears Uber may have done here. We also have a right to demand that they take reasonable steps to help make the technology safer for everyone, such as sharing incident sensor data.

After last week’s incident, Uber immediately halted testing of autonomous vehicles in cities across North America and has reached an undisclosed settlement with the victim’s family. It is also currently cooperating with Tempe officials, the National Highway Traffic Safety Administration (NHTSA), and the National Transportation Safety Board (NTSB) on their investigations into the incident.

Regulators have, up until now, largely adopted a light-touch approach to regulating autonomous cars. Arizona, for example, has virtually no rules dictating where and when testing can occur, and imposes no reporting or disclosure requirements, even about crashes, though following the accident it has banned Uber from testing self-driving cars in the state. California has granted 50 manufacturers permits to test autonomous cars within the state, so long as there is a safety driver behind the wheel; next month, manufacturers will be able to apply to test and deploy cars without a safety driver. NHTSA, which we criticized last year for trying to push through an ill-thought-out proposal to force connected cars to talk to each other,[2] prefers “voluntary guidance” over mandatory standards for autonomous driving systems. Waymo, Uber, and other self-driving car companies, just days before the recent accident, urged Congress to pass legislation that would facilitate the deployment of self-driving cars throughout the United States.

“Whenever you release a new technology there’s a whole bunch of unanticipated situations,” Arun Sundararajan, a professor at New York University’s business school, told Bloomberg. “Despite the fact that humans are also prone to error, we have as a society many decades of understanding of those errors.” When it comes to machines and algorithms, many people expect them to always be right. But they won’t always be right — especially as new technologies are being developed. And because of this misperception, how companies respond when things do go wrong is going to play an increasingly important role in the development of the autonomous and intelligent systems they are trying to build.

The autonomous car industry has not always done a great job with this. Tesla, for instance, responded to two incidents involving vehicles traveling in “autopilot” mode in January by simply reiterating their policy — the driver is supposed to remain fully attentive and keep their hands on the wheel at all times — rather than by trying to address the underlying consumer confusion generated by the technology’s misleading name. And after the company’s first autopilot death in June 2016, it “repeatedly went out of its way to shift blame for the accident” in its 537-word statement, even while acknowledging that the car’s sensors had failed to distinguish between a large white truck and the bright sky in the background. It also referred to the driver Joshua Brown’s death as a “statistical inevitability” on its blog. One crisis management consultant has called Tesla’s response a “perfect case study in the wrong way to handle this sort of crisis.”

Even after last week’s tragic Uber accident, the instinct of many (though not Uber’s, as far as we know) was to blame the humans. Many initial reports assumed that the pedestrian jumped off the median in front of a car, a theory which the incident video disproved. Later, questions were raised over whether the safety driver was paying adequate attention. We are somewhat concerned by that reaction. Decades of research show that humans are notoriously bad at doing exactly what the safety drivers are supposed to be doing: paying constant attention when they are not actively engaged in the activity. We aren’t even all that good at paying complete attention while we are actively driving. We must avoid relying on humans as liability sponges, or “moral crumple zones” that “bear the brunt of the moral and legal penalties when the overall system fails.”

Instead of pointing fingers, we need to focus on making the technology safer, and quickly. And the very first step in doing so is to ensure that when a terrible accident like this occurs, the company involved in the accident shares all of the underlying sensor data with other autonomous car makers so that no autonomous vehicle has to repeat the same mistake.

***

[1] The exact scope of data that should be shared, and who it should be shared with, involves some privacy tradeoffs. At minimum, companies should share the sensor data immediately preceding accidents or circumstances that could contribute to accidents (such as when a human safety driver needs to take control, or when a computer vision system fails to detect an obstacle that was found by LIDAR). It could also potentially include computer vision architectures and neural network models, as well as sensor data. Even when vehicles have different types of sensors, there will often be opportunities for cross-training or cross-testing.

When data is hard to sufficiently anonymize, this may require extra protections, such as contractual restrictions against de-anonymizing humans present in the data. If there were reliable ways to anonymize large amounts of vehicle sensor data, it could be desirable to share all of the data from the self-driving vehicle fleets, to enable its inclusion in training datasets, but we are not presently optimistic that such anonymization methods are available.
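
To make this concrete, here is a minimal sketch, in Kotlin, of what a shared incident record might contain. The type and every field name are hypothetical illustrations of the footnote’s suggestions, not any manufacturer’s or regulator’s actual format:

    // IncidentRecord is a hypothetical schema; all names here are assumptions.
    data class IncidentRecord(
        val vehicleId: String,              // pseudonymous fleet identifier
        val timestampUtc: Long,             // epoch milliseconds at the trigger event
        val triggerType: String,            // "collision", "disengagement", or "detection-miss"
        val vehicleSpeedMps: Double,        // speed at trigger time, in meters per second
        val lidarFrames: List<ByteArray>,   // raw LIDAR sweeps from the window before the event
        val cameraFrames: List<ByteArray>,  // camera frames over the same window
        val safetyDriverTookOver: Boolean   // whether a human intervened
    )

Even a record this small would let other manufacturers replay the moments before a failure against their own perception stacks, which is the kind of cross-testing described above.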

[2] The agency thankfully backed away from its plan, but out of concern about placing too much of a burden on auto manufacturers, rather than out of concern for security or privacy.
