
A real-time bonanza: Facebook’s Wormhole and Yahoo’s streaming Hadoop

If you’re into systems that can share data among each other in real time, this has been a good week. On Tuesday, Yahoo open sourced its version of the popular Storm stream-processing software that’s able to run inside Hadoop clusters. Then, on Thursday, Facebook detailed a system called Wormhole that informs each of the platform’s myriad applications when changes have occurred in another, so that every one is working from the newest data possible.

The Yahoo work is actually pretty important. Among the features Hadoop users have been demanding from the platform is a transition from batch-processing-only mode into something that can actually deal with data in real time. The reason for the demand is quite simple: Although being able to analyze or transform data minutes to hours after it’s generated is helpful for certain analytic tasks, it’s not too helpful if you want an application to be able to act on data as it hits the system.

A service like Twitter is a prime example of where Storm can be valuable. Twitter uses Storm to handle tweets so users’ timelines are up to date, and to do things like run real-time analytics and spot emerging trends. In fact, it was Twitter that open sourced Storm in 2011 after buying Storm creator BackType in order to get access to the technology and its developers.

Among web companies, Storm has become quite popular as a stream-processing complement to Hadoop since then. And now Yahoo has made possible a much tighter integration between the two — even to the point that Storm can borrow cycles from batch-processing nodes if it needs some extra juice. That’s a valuable feature — just last week I heard Twitter engineer Krishna Gade bemoan Storm’s auto-scaling limitations during a talk at Facebook’s Analytics @ Web Scale event.

Krishna Gade talking Storm at the Facebook event.

The Storm-on-Hadoop work is among the first of many promised improvements to come thanks to YARN, a major update to the Apache Hadoop 2.0 code that lets Hadoop clusters run multiple processing frameworks simultaneously. Twitter has been using the open source Mesos resource manager to achieve the same general capabilities, but Gade’s colleague Dmitriy Ryaboy said during the same talk that the company is switching to YARN because it expects — probably correctly — much more community effort will go toward continuously improving its capabilities.

Facebook’s Wormhole project isn’t open source (as far as I can tell), but its lessons are still valuable (and LinkedIn has open sourced a similar technology named Kafka). It’s what’s called a publish-subscribe system, which is essentially a concise way of saying that it manages communications between applications that publish information (e.g., updates to a database) and subscribe to the information their fellow applications are publishing. At Facebook, for example, Wormhole sends changes to Facebook’s master user database to Graph Search so that search results are as up to date as possible, or to its Hadoop environment so analytics jobs have the newest data.


Of course, like all things Facebook (its new Presto interactive query engine comes to mind), Wormhole is built to scale. Latency is in the low milliseconds, and, as blog post author Laurent Demailly notes:

“Wormhole processes over 1 trillion messages every day (significantly more than 10 million messages every second). Like any system at Facebook’s scale, Wormhole is engineered to deal with failure of individual components, integrate with monitoring systems, perform automatic remediation, enable capacity planning, automate provisioning and adapt to sudden changes in usage pattern.”

Although they were developed within separate companies, there’s actually a tie that binds Yahoo’s Storm-in-Hadoop work and Facebook’s Wormhole. As web companies grow from their initial applications into sprawling businesses composed of numerous applications and services, so too do their infrastructures. To address the differing needs of their various systems at the data level, the companies have begun breaking them down by their latency requirements (i.e., real-time, near real-time and batch, however they choose to word them) and then building tools such as Storm and Wormhole to manage the flow of data between the systems.

We’ve previously explained in some detail how LinkedIn and Netflix have built their data architectures around these principles, and we’ll hear a lot more about how they and other web companies are tackling this situation at Structure next week. Among the speakers are senior engineers and technology executives from Facebook, Google, LinkedIn, Box, Netflix and Amazon.

Feature image courtesy of Shutterstock user agsandrew.

Done deal: AppFog melds into Savvis Cloud

It happened a little quicker than we thought, but AppFog and its Platform-as-a-Service will be part of CenturyLink’s Savvis cloud lineup.

Terms of the acquisition, first reported by GigaOM, were not disclosed, but in a statement, CenturyLink said the AppFog public PaaS will be offered via its savvisdirect online catalog, which offers an array of hosted services. Savvis will also offer private, dedicated AppFog deployments to big customers.

AppFog CEO Lucas Carlson (pictured above) is now vice president, cloud evangelist for Savvis.

This acquisition is part of a bigger trend in which PaaS providers are trying to sell big companies on this model for developing and deploying their custom corporate applications, but traction has been limited.

Software-as-a-Service took off because it provided a strong argument for moving capex spending on on-premises CRM and salesforce-automation applications to a subscription model paid out of operational expenses. Infrastructure-as-a-Service took developers by storm when they saw they could bypass IT to spin development sandboxes up and down cheaply as needed. But many big companies still want their corporate apps to run in-house, even if they’re developed outside.

But lots of vendors continue to push the corporate PaaS case. Red Hat just made its OpenShift PaaS generally available, and Pivotal launched a re-architected Cloud Foundry V2 as a paid-only service. Some expect that other hosting providers, all of which are trying to build credibility with cloud services, will follow in CenturyLink/Savvis’ footsteps and buy up the remaining independent public PaaS providers.

iOS 7 is the essence of less


Simplicity is the ultimate sophistication.

And if simplicity is the ultimate sophistication, iOS 7 is very, very sophisticated.

The most powerful man in the room doesn’t need a loud voice. The securely rich don’t need to flaunt their wealth. The cool kids are the ones who aren’t trying to be cool. The truly elegant realize that understatement is the essence of sophistication.

iOS 7 camera app

And the ultimate interface is the least obvious to the user.

The latest version of iOS is the culmination of Apple’s six-year journey with iPhone, and that journey mirrors the changes that successfully brought Apple back from the brink of death in the late 1990s. The product that saved Apple was the iMac, and Steve Jobs threw it like a grenade into the computer industry. Where the industry was grey, iMac was color. Where the industry was separate pieces wired together, iMac was unified and singular. Where the industry was opaque, iMac was translucent.

But the original iMac was also a little childish — a little young — before it became the singular objet d’art that it is today. And like iMac’s 15-year transition from fruity to elegant, iOS has now crossed the user interface Rubicon from Microsoft Bob to the Bauhaus — from decoration to design.

Versions 1 through 6 were Apple’s mobile juvenilia; iOS 7 is Apple at the zenith of its design zen.

Less inter, more face

By definition, an interface is a bridge. A connection. Something that takes one thing and attaches it to another. A user interface takes what we want out of the machine and supplies it to us in a format we can see, understand, and use.

Siri in iOS 7

Glancing back at former iOS versions after using iOS 7 is an interesting experiment. As revolutionary, incredible, and praised as it was, old iOS seems obvious, heavy-handed, and almost crude in comparison. Buttons are massive, messy objects that just beg to be interpreted as old-world meatspace buttons that physically depress with a ker-thunk! as you hit them. Borders surround chunks of information like arm-in-arm security staff holding back the outside world from some very exclusive event.

Now, iOS 7 is much more authentically digital and much less representationally physical.

We’ve all grown up, Apple is saying, and we no longer need these real-world touchable referents in order to understand digital information. And, in fact, they’re holding us back, restraining us and surrounding the content we want with gimmick and gewgaw.

Who among us has ever used a rolodex? Do you keep a pad of paper handy at all times to jot down notes with pen and ink? Do we really need green felt, as Apple’s Federighi joked about in the recent WWDC keynote, to make us think of games and fun? And do we all have smoking jackets, cee-gars, and snifters brimming with brandy as we amble back to the snooker table, one hand in our houndstooth jackets?

No, no, and no.

One of the key values in iOS 7, Apple says, is deference. The user interface is there to help users get, understand, and work with the data and information they want, not to compete with it. Interface should be underneath, Apple is saying. It’s kingmaker, not king; servant, not master. Background, not foreground.

Elegant doesn’t mean somber

In spite of its elegance, iOS 7 isn’t standoffish or unapproachable. Not dark and distant.

Rather — and this is why some have likened it to Windows Phone and the new Windows design language — iOS 7 is light, approachable, even colorful in places. In other words, Apple’s not trying for an affect here. It’s not trying so hard to make iOS the nth degree in haute couture so that it becomes elitist or arrogant. Rather, Apple is looking for the best way to present and offer up experiences enabled by the device in your hand — a style so unstyled it’s not really a style but an expression of humility to the text, images, data, voices, and fun that will flow through to the user.

And that’s done in living color.


Minimalism is sometimes very minimal

While I’m happy with Apple’s overall direction, let’s keep in mind that this is a beta operating system and not a universally released, fully available product. And there’s room for improvement.

Which button is pressed? Is there a button? Can you press it?

Minimalism is a language that not everyone speaks, and sometimes less is just … less.

Just one example is in Apple’s phone app, which indicates that yes, you are on speaker mode, with a very subtle light effect under the volume icon. It’s not nearly as obvious as iOS 6’s speaker mode — which is the point, after all — but the question is, is it obvious enough?

In other words, can you be so simple, so restrained, and so tentative in what you put into your user interface that you’re actually increasing the cognitive load on your users?

In a word, yes, and I think that there are some in the iPhone user camp who are going to be astonished but not delighted with some elements of the new operating system. Microsoft is going through its own challenges as users are being forced through a massive update in its desktop design language, and I think Apple will see something similar.

Going somewhere good

That said, I like iOS 7. In fact, I really, really like iOS 7.

Photos in iOS 7

I have been using it for the better part of a week now, and despite the bugginess inherent in a beta operating system (sometimes the phone app hangs, sharing photos and videos doesn’t work in all cases, and so on), I would not go back.

We give bright, noisy toys to kids to hold their attention, and juveniles drive loud, flashy cars to reinforce the image of themselves that they desire to project, but we don’t need that.

We’re all grown up — mostly — and we know that simplicity is the core of sophistication.

Apple knew that back in the 1970s, when Steve Jobs adopted that as one of the company’s marketing slogans. Leonardo da Vinci knew it when he first said it.

And, I think, most Apple fans know it, too.

Image credits: most by Brian Roizen; top by John Koetsier.

Obama orders government agencies to free up wireless spectrum for the people


President Obama is trying to get off the tech community’s shit list.

The President issued a memorandum today to expand the availability of spectrum and bolster America’s leadership in wireless innovation. He mandated that federal agencies free up a significant portion of wireless spectrum so that it can be used by individuals and businesses, and he also announced that the government will invest $100 million in research and development of spectrum-sharing technology.

The initiatives are in an effort to fuel American innovation and maintain the country’s position as a “global leader in wireless broadband technologies.”

“These new initiatives are the latest in a series of actions the administration has taken over the past four years to ensure American businesses and workers have the infrastructure they need to compete in the 21st-century economy,” said a fact sheet the administration issued this morning. 

The President issued a memorandum in 2010 called “Unleashing the Wireless Broadband Revolution,” which required that 500 MHz of spectrum be made available for commercial use within the next 10 years. Meanwhile, the percentage of American homes reached by high-speed broadband networks has more than quadrupled since 2009 and is now at 80 percent. Use of mobile devices has exploded in the past few years, putting greater strains on available spectrum and making the need to increase access more urgent.

The military, government agencies, and law enforcement currently use a significant portion of available spectrum (which is a finite resource) for national security and public safety. In July 2012, the President’s Council of Advisors on Science and Technology presented a report that said “it is imperative we make enough wireless spectrum available to meet the needs of this rapidly expanding and innovative sector of the economy.” Increased access will not only spur domestic economic growth but also help keep the U.S. on top of the technological hierarchy.

President Obama has been a vocal advocate for technical innovation and has emphasized over and over the important role that entrepreneurship plays in driving the U.S. economy. Last week, he announced an ambitious plan to get 99 percent of American students connected to lightning-fast Internet within five years. He said that American schools, where only 20 percent of students have access to high-speed Wi-Fi, are falling behind nations like South Korea, where 100 percent of students are wired. Furthermore, the President has made a major push for STEM (science, technology, engineering, math) education. American students’ test scores in these areas lag far behind those of other countries, and there are far more open STEM jobs than qualified people to fill them.

The spectrum initiative is part of the overall mission to grow the U.S. economy through technology. The announcement comes a week after news broke about the FBI and NSA’s top secret data-mining project, PRISM. The snowball of privacy invasion keeps on growing, and today Bloomberg came out with a report that found that thousands of technology, finance, and manufacturing firms are sharing customer data with the government.

The government may be spying on your Internet activity, but it also wants you to have better access to that Internet.

While the expansion of access to spectrum and the improvement of Internet in schools are crucial steps toward supporting technical innovation and the economy, so is establishing an environment where businesses, entrepreneurs, and individuals are not indiscriminately being spied on.

Photo Credit: White House/Flickr

This tiny submarine could find life under Earth’s glaciers — or on Jupiter moon Europa

This submersible is barely the size of two soda cans, yet scientists think it might be useful for finding life on and around other planets, including Jupiter’s Europa, an ice-covered moon.

The nano-sub is called DADU and is a project of Uppsala University’s Division of Microsystems Technology. Created by researchers Jonas Jonsson, Erik Edqvist, Hugo Nguyen and Greger Thornell, the submarine is designed to explore subglacial lakes, the likes of which exist at Earth’s own poles and host microbial life capable of surviving at extremely low temperatures.

The European Space Agency is planning a mission to measure the thickness of surface ice on Europa in 2030, and NASA is also planning a Europa orbit mission. While specific instruments haven’t yet been designated, the DADU prototype shows promise for such missions.

The project was first created as part of a partnership between NASA and the Swedish university. Nanosatellites have proven useful for gathering data in low-cost missions.

Europa’s surface is covered in ice, so the scientists think a robotic mission might be able to uncover more data by drilling into the moon’s surface and exploring what lies beneath.

From a brief on the submarine and its intended original use case on Earth:

Some of these lakes have been sealed off from their surrounding environments for considerable periods of time and are thought to harbor a unique biota, and contain important information on how life conditions were several thousand years ago. The DADU project [objective] was to develop a miniature submersible vehicle, which could perform distant and autonomous exploration, and analysis in harsh and narrow environments. It was designed to be able to reach down through narrow glacial bore holes and into the previously unreachable subglacial lakes, to explore and make in-situ measurements.

The sub carries thrusters for navigation, a miniaturized side-scan sonar for subglacial imaging, a fiber-optic cable for real-time communication, a camera, an aquatic sampler, and other instruments for data collection.

Image credit: ASTC

Sixth Annual Movie-Plot Threat Contest Semifinalists


On April 1, I announced the Sixth Annual Movie Plot Threat Contest:

I want a cyberwar movie-plot threat. (For those who don’t know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services — people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.

Submissions are in, and — apologies that this is a month late, but I completely forgot about it — here are the semifinalists.

  1. Crashing satellites, by Chris Battey.
  2. Attacking Dutch dams, by Russell Thomas.
  3. Attacking a drug dispensing system, by Dave.
  4. Attacking cars through their diagnostic ports, by RSaunders.
  5. Embedded kill switches in chips, by Shogun.

Cast your vote by number; voting closes at the end of the month.

Posted on June 14, 2013 at 12:20 PM


Encrypted e-mail: How much annoyance will you tolerate to keep the NSA away?

In an age of smartphones and social networking, e-mail may strike many as quaint. But it remains the vehicle that millions of people use every day to send racy love letters, confidential business plans, and other communications both sender and receiver want to keep private. Following last week’s revelations of a secret program that gives the National Security Agency (NSA) access to some e-mails sent over Gmail, Hotmail, and other services—and years after it emerged that the NSA had gained access to full fiber-optic taps of raw Internet traffic—you may be wondering what you can do to keep your messages under wraps.

The answer is public key encryption, and we’ll show you how to use it.

The uses of asymmetry

The full extent of the cooperation between the NSA and various technology companies is unclear. It will probably remain that way for the foreseeable future. For the time being, however, it seems likely that the standard cryptographic tools used to secure data “in flight”—that is to say, the SSL that protects data traveling between machines on the Internet—remain secure as long as certain best practices are used.

That protects against some threats, such as wholesale monitoring of Internet traffic of the kind the NSA is known to engage in, but it doesn’t do anything to protect data that’s “at rest.” That is to say, SSL doesn’t do anything to prevent a company like Google or Microsoft from handing over an archive of your e-mail in response to a court order. The e-mails are just lying around on some Google server somewhere.

If you don’t want a government, service provider, employer, or unauthorized party to have access to your mail at rest, you need to encrypt the mail itself. But most encryption algorithms are symmetric, meaning that the encryption key serves a dual purpose: it both encrypts and decrypts. As such, people encrypting mail with a symmetric key would be able to decrypt other mail that used the same symmetric key. While this would protect against anyone without the key, it wouldn’t be very useful as an encrypted e-mail system.
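The dual-purpose nature of a symmetric key is easy to see in a toy cipher. (Repeating-key XOR here, chosen only because it fits in a few lines; real symmetric ciphers such as AES are vastly stronger but share the same property.)

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the function
    # twice with the same key returns the original data
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"
ciphertext = xor_cipher(b"meet me at noon", key)
plaintext = xor_cipher(ciphertext, key)  # the SAME key decrypts
assert plaintext == b"meet me at noon"
```

Anyone holding `key` can run that second call, which is exactly why handing one shared key to all your correspondents fails as an encrypted-mail system.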

The solution to this is asymmetric cryptography. In asymmetric encryption there are two complementary keys, and a message encrypted with one key can only be decrypted with the other. The two keys are known as a private key, which as the name might suggest is kept private, and a public key, which is broadcast to the world. Each time you want to send an e-mail to someone, you encrypt it with the recipient’s public key, and only the matching private key can decrypt it.
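The one-key-encrypts, other-key-decrypts relationship can be demonstrated with textbook RSA on toy-sized primes. (Illustration only: real asymmetric encryption uses keys thousands of bits long plus padding schemes, via vetted tools such as GnuPG, never raw textbook RSA.)

```python
# Textbook RSA with tiny primes, purely to show the key relationship.
p, q = 61, 53
n = p * q                # 3233: the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent  -> public key is (e, n)
d = pow(e, -1, phi)      # private exponent -> private key is (d, n)

message = 42                        # raw RSA treats messages as numbers < n
ciphertext = pow(message, e, n)     # sender encrypts with the PUBLIC key
recovered = pow(ciphertext, d, n)   # only the PRIVATE key undoes it
assert recovered == message
```

Publishing `(e, n)` costs nothing: deriving `d` from it requires factoring `n`, which is trivial for 3233 but infeasible at real key sizes.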

Asymmetric encryption is also used to perform mail signing. For this, the mail sender encrypts a hash, or mathematical fingerprint, of their message, producing a signature. Hashes are designed so that any small change to the message’s text will produce a different hash value. Anyone reading the mail can then decrypt the signature using the sender’s public key, giving them the original hash value. They can then compute the hash value of the mail they received and compare the two. If the values are the same, the message hasn’t been modified. If they’re not, it has—and we’ll see the uses of this later on.
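Signing runs the same math in the other direction: the private key produces the signature, and the public key checks it. A sketch using the same textbook-RSA toy parameters (again illustrative only; real mail signing uses proper hash-and-padding schemes):

```python
import hashlib

# Toy RSA parameters (textbook RSA, for illustration only).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

def fingerprint(msg: bytes) -> int:
    # hash of the message, reduced mod n so it fits the toy modulus
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

msg = b"The meeting is at noon."
signature = pow(fingerprint(msg), d, n)  # sender signs with the PRIVATE key

# any reader verifies with the sender's PUBLIC key:
assert pow(signature, e, n) == fingerprint(msg)
```

With this tiny modulus the check is weak, but at real key sizes a tampered message will not match the fingerprint recovered from the signature, so verification fails exactly as the paragraph above describes.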

Making things even more complex, having encryption support isn’t itself enough. To a great extent, you don’t control the things that are in your own inbox. That’s all mail that someone else has sent you. If you want your inbox to contain encrypted mail that only you can read, you need to be sure that people sending you mail are encrypting that mail when they send it. And if you want to be sure that everything in your sent mail folder is encrypted, you’ll need to send other people encrypted mail.

As a result, e-mail encryption is not something you can impose unilaterally. To protect the contents of your account, you need to ensure that everyone you communicate with is in a position to handle encrypted mail—and is willing to use that ability.

Finally, e-mail encryption doesn’t encrypt everything. Certain metadata—including e-mail addresses of both sender and recipient, time and date of sending, and the e-mail’s subject line—is unencrypted. Only the body of the mail (and any attachments) gets protected.

If you’re happy with these constraints, e-mail encryption is for you. Unfortunately, it can be complicated to use.

Cutting through the complexity

Few e-mail programs have PGP encryption features enabled by default. And even if they do, end users must still navigate a long and confusing series of steps. Tasks include generating the key pair that will lock and unlock the communications and storing the private key in a location where no one else can get it. It also requires securely sharing a public key with every single person who wants to send you a private e-mail and securely getting a unique public key from each person you want to send encrypted e-mail to. No wonder most people—reportedly including Glenn Greenwald, the Guardian reporter who exposed aspects of the secret NSA dragnet—need time to get up to speed.

Fortunately, free e-mail encryption programs are available for all major operating systems, and the ability to use them effectively isn’t out of the grasp of average computer users if they know where to look. What follows is a set of step-by-step instructions for using GnuPG, the open-source implementation of the PGP encryption suite, to send and receive encrypted e-mails on machines running Microsoft Windows and Mac OS X.

After that, we’ll show readers how to use a similar crypto standard called S/MIME, which may prove simpler to deploy because it is already built into many desktop and mobile e-mail clients, including Outlook and Thunderbird. (Interested in S/MIME? Skip directly to page three.)

Linux will be touched on only briefly because much of the functionality is already included in various distributions and because many Linux users already have PGP down cold. (Users are invited to provide Linux instructions and screenshots in the comments following this article.)

PGP on Windows

The basic element you’ll need to encrypt mail is software to generate and manage your key pair and make them work with whatever e-mail program you happen to use. On Windows, there’s no shortage of proprietary apps that will do both, with Symantec’s PGP Desktop E-mail being perhaps the best known. There’s nothing wrong with this offering, but it’s almost $200 for a single-user license. This tutorial will instead focus on the open-source Gnu Privacy Guard, which is available for free on Windows, Mac, and Linux platforms.

GnuPG, or simply GPG, is still available mostly as a command-line tool, meaning it lacks the graphical interface many end users would feel more comfortable using. Rather than learn a long list of GPG commands, many e-mail users are better off installing a graphical implementation of GPG. On Windows, Gpg4win will give you everything you need to generate strongly encrypted messages that can be sent and later decrypted by the intended receiver using standard e-mail programs.

At the time of writing, the most recent version of Gpg4win is 2.1.1, and it’s available here. After downloading such a sensitive piece of software, you’ll want to confirm the installer hasn’t been tampered with and truly came from Gpg4win rather than a site masquerading as the official one. To do that, we’ll need to check the SHA1 checksum for the downloaded file and make sure it matches the hash—a94b292c8944576e06fe8c697d5bb94e365cae25—listed on the Gpg4win download page. For those who prefer a graphical interface, use HashCalc. Install HashCalc and then open the program. In the “data” box, navigate to the folder where the downloaded gpg4win-2.1.1.exe file is located. In our case, since the SHA1 hash calculated by HashCalc matches the SHA1 digest provided on the Gpg4win download page, we have a high degree of confidence the file we’re about to install is genuine.
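If you have Python handy, you can also compute the digest yourself in a few lines instead of installing HashCalc (the filename below is the installer discussed in this article; adjust the path to wherever you saved it):

```python
import hashlib

def sha1_of_file(path: str) -> str:
    # hash the file in chunks so large installers needn't fit in RAM
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# the value published on the Gpg4win download page:
expected = "a94b292c8944576e06fe8c697d5bb94e365cae25"
# print(sha1_of_file("gpg4win-2.1.1.exe") == expected)  # True if intact
```

The comparison string must come from the download page itself, not from the same place you got the installer file, or a tampered mirror could simply publish a matching hash.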

For readers who prefer command lines, Microsoft’s File Checksum Integrity Verifier may be a better way to check the SHA1 hashes. You’ll need to download and extract the FCIV package and follow the instructions in the readme text file, including making sure the folder containing the FCIV executable file has been added to the system path of Windows. With that out of the way, open a Windows command window and navigate to the folder containing the Gpg4win installer.

Once you’re sure you have the real gpg4win-2.1.1.exe, double-click on the file and click Yes to the User Access Control dialogue. When presented with the Gpg4win installation welcome screen, click Next, and then click Next at the following window to accept the Gpg4win license agreement. The next screen will allow you to choose the precise GPG components you want to install. Make sure you install all available components, including GPA, which is short for the GNU Privacy Assistant. Click Next at the Choose Components screen and again at the Destination and Install Options screens.

At the Install Options screen, make sure the “start menu” box is checked, click Next, and at the next window click Install. We won’t be using S/MIME for now, so if you see any screens referring to Trustable Root Certificates, you can click the box to skip configuration and click Next. The installation is now complete.

When you click on your Start menu and choose All Programs, you should now see a Gpg4win folder. Highlight it and choose GPA. This is the GNU Privacy Assistant. We’ll use it to generate our key pair, and later we’ll use it to store the public keys of people who will receive our encrypted messages. The first time you open GPA, you’ll see a screen asking if you want to generate a private key. That’s exactly what we want to do, so click “Generate key now.”

In the screens that follow, enter your name and e-mail address. When asked if you want to back up your key, choose “Do it later.” It’s not that this step isn’t important, but we’ll want to back up the key only after we’re satisfied that we’ve done everything correctly. Next, you’ll need to choose a passphrase to protect your key. Your passphrase is like the password protecting an e-mail or Web account. Except rather than preventing an unauthorized person from accessing your account, it prevents the person from using your private key should it ever be lost or stolen. In other words, the password is extremely sensitive. It should have a minimum of nine characters, but 18, 27, or even 36 characters are even better. For more tips on generating a strong password, see Ars Senior Reporter Jon Brodkin’s discussion of master passwords here. When you’re finished, you’ll have generated your first key pair: the public key you will share with other people so they can send encrypted messages that only you can read, and the private key you’ll use to decrypt those messages.

While generating your key, be sure to set an expiration date rather than allowing it to remain valid forever. This way, keys that new users abandon, lose, or never end up using won’t remain on public servers indefinitely. Remember also to back up your private key somewhere that’s extremely safe. Storing it on a USB stick that’s kept in a lock box is one suitable method. You may also want to upload your public key to one or more public key servers. These servers give crypto users a way to make their keys available to others and to fetch other people’s public keys.

Now that we’ve generated our first key pair, let’s import the public key of someone else so we’ll have it later when we’re ready to send them our first encrypted e-mail. For this, get someone to give you their public key, preferably in person. It will look something like this:

Version: GnuPG v2.0.17 (MingW32)


Take the public key of a real-world contact and save it to a file named something like key.txt. If you don’t have a real-world contact who has a public key, save the above public key to a file and name it key.txt. Now, with GPA open, choose the “Import” icon, navigate to the disk location of key.txt, highlight the file, and click Open. Congratulations. You’ve just imported your first public key. Don’t get too excited just yet. You’ll need to import a public key for each person you want to send encrypted mail to.

NSA gets early access to zero-day data from Microsoft, others

The National Security Agency (NSA) has used sensitive data on network threats and other classified information as a carrot to gain unprecedented access to information from thousands of companies in the technology, telecommunications, financial, and manufacturing sectors, according to a report by Michael Riley of Bloomberg. And that data includes information on “zero-day” security threats from Microsoft and other software companies, according to anonymous sources familiar with the data-swapping program.

The NSA isn’t alone in the business of swapping secrets with the corporate world. The FBI, CIA, and Department of Defense (DOD) also have programs that let them trade sensitive government information with corporate “partners” in exchange for access to things like information on cyberattacks, traffic patterns, and other data related to network security.

The NSA’s dual role as the security arbiter for many government networks and as the point organization for the US government’s offensive cyberwarfare capabilities means that the information it gains from these special relationships could be used to craft exploits for gaining access to the computer systems and networks of foreign governments, businesses, and individuals. But it remains unclear just how much of a head start information about bugs actually gives the NSA, or whether companies actually delay posting fixes on the NSA’s behalf.

Unlocking Windows

According to Bloomberg’s sources, Microsoft provides information about security flaws and other bugs in its software in advance of public releases of fixes. The information provides the government an important early warning about potential attacks on systems, especially DOD networks. The military is Microsoft’s single largest customer; systems on both its unclassified and secret networks (NIPRNET and SIPRNET) use Microsoft software. Microsoft has similar early-access programs for other customers, and it often deploys patches to large customers for testing prior to pushing them out on its monthly “Patch Tuesday” schedule.

But early access to information about bugs also opens up the opportunity for the NSA and DOD’s Cyber Command (both of which are headquartered at Fort Meade, Maryland, and both of which are led by Army General Keith Alexander) to use them for potential “weaponized” exploits.

Antivirus provider McAfee also shares data with the NSA, providing information about threat trends. Michael Fey, McAfee’s worldwide CTO, told Bloomberg that the company also shares information about “cyberattack patterns and vector activity, as well as analysis on the integrity of software, system vulnerabilities, and hacker group activity.”

Metadata on targets

Information about bugs from software providers could be used in a very targeted way by the NSA, based on the metadata collected from its network monitoring operations. According to the Bloomberg report, US telecommunications companies willingly give the NSA access at overseas points of presence to data that would require a FISA warrant to collect in the US. That access lets the agency piece together profiles of individual systems from the traffic they send over the Internet, including Web requests and other application traffic that reveal the OS and browser versions the systems are using, the version of Java they have installed, and other details that could be used to target them with exploits.

As a result, the NSA can turn to its in-house exploit-building capabilities or turn to suppliers who are paid for zero-day exploits to create specially targeted packages of attacks to go after systems of interest, much in the way it reportedly contributed to the development of Stuxnet and Flame to attack systems connected to Iranian nuclear research. The result, while similar in ways to the capabilities demonstrated by the Chinese People’s Liberation Army’s cyberwarfare unit, could be a much more sophisticated offensive capability aided and abetted by the very companies that wrote the software targeted in the first place.

These revelations could have severe repercussions for the US software and cloud computing services industries. US surveillance laws have already been cause for concern and outrage among European customers. And the implications of the USA PATRIOT Act have caused problems for software-as-a-service providers with US-based data centers in Canada as well. With the NSA’s relationship with software companies (and especially with Microsoft) now out in the open, more foreign governments may follow the route China has taken and develop their own operating systems for government use.

Black Hat USA 2013 Reveals Turbo Talks On Top Topics

[NOTE: Black Hat and Dark Reading are both part of UBM Tech. As the key July 27th-August 1st information security event in Las Vegas approaches, we'll be sharing information about the show directly from its creators here on Dark Reading.]

There will be a lot of new things to see and do at Black Hat this year — but one of the concepts we’re bringing back is a blast from the past: the Turbo Talk. We’re particularly excited about the no-nonsense, no-stories, no-fluff 25-minute format; these high-speed sessions will cover content from almost every corner of the security space. For a preview of the breadth you’ll see, check out the talks below.

– Clickjacking attacks are no longer new, and there are claims of adequate protection. However, as you’ll see, there is still a lot more work to be done. In “Clickjacking Revisited: A Perceptual View of UI Security,” Devdatta Akhawe will revisit UI security attacks from a perceptual point of view and highlight novel attacks made possible by a thorough understanding of human perception. Some of these are 100% successful, yet they only scratch the surface of what’s possible. Defending against such attacks will be nearly as complex as human perception itself.

– Users demand seamless mobile app experiences, but this comes at the expense of security, with fewer forms of checking and validation built into the APIs that facilitate the magic. Of course, this leaves the APIs wide open to exploitation, as Daniel Peck will show in his Turbo Talk, “Abusing Web APIs Through Scripted Android Applications.” He’ll use JRuby to run code from targeted APKs in an easily scriptable way, and show how to use the Burp suite to probe APIs for weaknesses, wrapping up with several case studies that demonstrate popular apps being seriously compromised.

– Big data is not just a buzzword, despite its current overexposure in the media. But how can it be used to improve the security posture of an application? In the Turbo Talk “Big Data for Web Application Security,” Mike Arpaia will explore the pros and cons of big data as they pertain to app security. One of the most important steps is separating the problems that can and should be solved by big data from those that are not so applicable. Upon establishing an understanding of the proper problem domain, his talk will finish with several specific examples of how one security team uses big data daily to solve hard, interesting problems and provide a safer user experience.

More information about Black Hat USA 2013, which has a rapidly growing slate of Briefings talks as well as a comprehensive set of two- and four-day trainings, is available now. Online registration, at a rate reduced from onsite pricing, is open until July 24th.