A randomly generated, totally novel enzyme rescues mutant bacteria


Proteins are chains of amino acids, and each link in the chain can hold any one of the 20 amino acids that life relies on. If you were to pick each link at random, the number of possible proteins ends up reaching astronomical levels pretty fast.

So how does life ever end up evolving entirely new genes? One lab has been answering that question by making its own proteins from scratch.

Way back in 2016, the same lab figured out that new, random proteins can perform essential functions. And those new proteins were really new. They were generated by scientists who made amino acid sequences at random and then kept any that folded into the stable helical structures commonly found in proteins. These proteins were then screened to see if any could rescue E. coli that were missing a gene essential to survival.

Three proteins succeeded, which indicates that they compensated for the missing gene’s essential function. But they did not do so by acting as a catalyst (meaning they weren’t enzymes).

In a recent paper in Nature Chemical Biology, however, the lab is reporting that one newer protein has acted as a catalyst.

The E. coli used in these experiments lacked the ability to use the iron provided in their medium because of the deletion of a gene that normally provides this function. So the experiments were a test to see if a randomly generated protein would be able to catalyze reactions with iron. The three proteins that had passed this test in 2016, however, simply altered gene activity so that the iron became available through other pathways.

To generate the recent enzyme, the researchers took one of the proteins that already rescued the mutant E. coli and subjected it to random mutagenesis. This ultimately produced an iron-releasing enzyme. Just like the natural enzyme, this synthetic one has a chiral preference for its substrate, meaning that it can only work with one structural form of the molecule and not its mirror image.

But its similarities to the native enzyme end there. The amino acid sequence of this synthetic enzyme bears no relation to the bacterial enzyme it replaces. This made figuring out how it works very difficult. Usually this is done by comparing the protein in question to similar ones from other species: clearly not an option here. The researchers also tried to crystallize it, which would let them figure out its structure, but no deal.

So they started mutating its amino acids one by one to see which substitutions rendered the enzyme inactive; wherever activity was lost, the original amino acid at that position must be important. This method revealed five particular amino acids that likely comprise the active site. When software that predicts protein structures was given the protein’s amino acid sequence and told that these five had to be close together, it spit out one structure that seemed the most likely.

And just like the amino acid sequence, the structure looked so totally different from the native enzyme’s that the researchers think the enzyme must work through a completely new mechanism.

The scientists made this enzyme without any kind of rational design or strategy; they were just tooling around with random amino acid sequences and having bacteria determine whether each one could do the job. In a completely contrived case of convergent evolution, the researchers made a protein that does not share a sequence, structure, or even mechanism with the one evolution hit upon – yet it performs the same function. It works a thousand-fold slower than the natural enzyme, but it might get better if given further time to evolve.

Nature Chemical Biology, 2018. DOI: 10.1038/NCHEMBIO.2550



Some thoughts on security after ten years of qmail 1.0


Some thoughts on security after ten years of qmail 1.0 – Bernstein, 2007

I find security much more important than speed. We need invulnerable software systems, and we need them today, even if they are ten times slower than our current systems. Tomorrow we can start working on making them faster.

That was written by Daniel Bernstein over ten years ago, and it rings just as true today as it did on the day he wrote it – especially with Meltdown and Spectre hanging over us. Among his many accomplishments, Daniel Bernstein is the author of qmail. Bernstein created qmail because he was fed up with all of the security vulnerabilities in sendmail. Ten years after the launch of qmail 1.0, and at a time when more than a million of the Internet’s SMTP servers ran either qmail or netqmail, only four known bugs had been found in the qmail 1.0 releases, and no security issues. This paper lays out the principles which made this possible. (With thanks to Victor Yodaiken, who pointed the paper out to me last year.)

How was qmail engineered to achieve its unprecedented level of security? What did qmail do well from a security perspective, and what could it have done better? How can we build other software projects with enough confidence to issue comparable security guarantees?

Bernstein offers three answers to these questions, and also warns of three distractions: things that we believe are making things better, but may actually be making things worse. It seems a good time to revisit them. Let’s get the distractions out of the way first.

Three distractions from writing secure software

The first distraction is ‘chasing attackers’ – ending up in a reactive mode whereby you continually modify the software to prevent disclosed attacks. That’s not to say you shouldn’t patch against discovered attacks, but this is a very different thing to writing secure software in the first place:

For many people, “security” consists of observing current attacks and changing something — anything! — to make those attacks fail… the changes do nothing to fix the software engineering deficiencies that led to the security holes being produced in the first place. If we define success as stopping yesterday’s attacks, rather than as making progress towards stopping all possible attacks, then we shouldn’t be surprised that our systems remain vulnerable to tomorrow’s attacks. (Emphasis mine.)

The second distraction is more surprising: it’s the principle of least privilege! This states that every program and user of the system should operate using the least set of privileges necessary to complete the job. Surely that’s a good thing? We don’t want to give power where it doesn’t belong. But the reason Bernstein calls it a distraction is that assigning least privilege can lull us into a false sense of security:

Minimizing privilege is not the same as minimizing the amount of trusted code, and does not move us any closer to a secure computer system… The defining feature of untrusted code is that it cannot violate the user’s security requirements.

I’m not sure I agree that least privilege does not move us any closer to a secure computer system, but I do buy the overall argument here. My opinion might carry more weight though if I had also managed to write sophisticated software deployed on millions of systems with only four known bugs and no security issues over ten years of operation!

The third distraction is very topical: speed. We know about the wasted time and programming effort through premature optimisation, but our veneration of speed has other more subtle costs. It causes us to reject out of hand design options that would be more secure (for example, starting a new process to handle a task) — they don’t even get tried.

Anyone attempting to improve programming languages, program architectures, system architectures etc. has to overcome a similar hurdle. Surely some programmer who tries (or considers) the improvement will encounter (or imagine) some slowdown in some context, and will then accuse the improvement of being “too slow” — a marketing disaster… But I find security much more important than speed.

Make it secure first, then work on making it faster.

How can we make our software more secure?

The first answer, and surely the most obvious answer, is to reduce the bug rate.

Security holes are bugs in our software. If we can reduce or eliminate bugs (across the board) then we should also reduce or eliminate security holes.

Getting the bug rate down will help, but notice that it’s a rate: bugs per line of code. This suggests a second answer: reduce the amount of code in the system:

Software-engineering processes vary not only in the number of bugs in a given volume of code, but also in the volume of code used to provide features that the user wants… we can meta-engineer processes that do the job with lower volumes of code.

Note here the importance of fewer lines of code per required feature. As we saw last year when looking at safety-related incidents (‘Analyzing software requirements errors’, ‘The role of software in spacecraft accidents’), many problems occur due to omissions – a failure to do things that you really should have done. And that requires more code, not less.

If we just stop at lowering the bug rate and reducing the number of lines of code, we should have fewer security holes. Unfortunately, the end result of just one exploited security hole is the same in software with only one hole as it is in software with multitudes. There’s probably some exponential curve that could be drawn plotting software engineering effort against number of bugs, whereby early gains come relatively easily, but chasing out the last few becomes prohibitively expensive. (I’m reminded of, for example, ‘An empirical study on the correctness of formally verified distributed systems.’) Now the effort we have to make in reaching the highest levels of assurance must in some way be a function of the size of the code to be assured. So it’s desirable to eliminate the need to reach this level of assurance in as many places as possible. Thus,

The third answer is to reduce the amount of trusted code in the computer system. We can architect computer systems to place most of the code into untrusted prisons. “Untrusted” means that code in these prisons — no matter what the code does, no matter how badly it behaves, no matter how many bugs it has — cannot violate the user’s security requirements… There is a pleasant synergy between eliminating trusted code and eliminating bugs: we can afford relatively expensive techniques to eliminate the bugs in trusted code, simply because the volume of code is smaller.

Meta-engineering to reduce bugs

The section on techniques for eliminating bugs contains a paragraph worth meditating on for a while. I strongly suspect the real secret to Bernstein’s success with qmail is given to us right here:

For many years I have been systematically identifying error-prone programming habits — by reviewing the literature, analyzing other people’s mistakes, and analyzing my own mistakes — and redesigning my programming environment to eliminate those habits.

Some of the techniques recommended include making data flow explicit (for example, designing large portions of qmail to run in separate processes connected through pipelines, which made much of qmail’s internal data flow easier to see), simplifying integer semantics (using big integers and regular arithmetic rather than the conventional modular arithmetic), and factoring code in order to make error cases easier to test.
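To make the “explicit data flow” idea concrete, here is a minimal sketch, assuming a POSIX environment; it is not qmail’s actual code, and the parsed line it passes along is an invented placeholder. The untrusted parser runs in its own process, and the only way its output can reach the trusted side is through the pipe:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                  /* child: the untrusted parser */
        close(fds[0]);
        /* Placeholder result; a real parser would validate raw input here. */
        const char parsed[] = "sender=user@example.org\n";
        write(fds[1], parsed, strlen(parsed));
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                   /* parent: the trusted side */
    char buf[256];
    ssize_t n = read(fds[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("received from parser: %s", buf);  /* the only data path in */
    }
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}

Because every byte the parser produces must cross this single pipe, reviewing the trust boundary means reviewing one read call rather than auditing shared state.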

Most programming environments are meta-engineered to make typical software easier to write. They should instead be meta-engineered to make incorrect software harder to write.

Eliminating code

To reduce code volume we can change programming languages and structures, and reuse existing facilities where possible.

When I wrote qmail I rejected many languages as being much more painful than C for the end user to compile and use. I was inexplicably blind to the possibility of writing code in a better language and then using an automated translator to convert the code into C as a distribution language.

Here’s an interesting example of design enabling reuse: instead of implementing its own permissions checks to see whether a user has permission to read a file, qmail simply starts a delivery program under the right uid. This means there’s an extra process involved (see the earlier discussion on the ‘speed’ distraction), but avoids a lot of other hassle.
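A sketch of that design, assuming a POSIX system; deliver_as_user and the “delivered” payload are invented for illustration and are not qmail’s actual interfaces. The point is that after setuid, the fopen call succeeds or fails according to the kernel’s own permission checks:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Run one delivery as the target user; the kernel now answers
 * "may this user write this mailbox?" for us. */
int deliver_as_user(uid_t uid, gid_t gid, const char* mailbox_path) {
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {
        /* Drop group first, then user; refuse to run if either fails. */
        if (setgid(gid) != 0 || setuid(uid) != 0) _exit(111);
        FILE* f = fopen(mailbox_path, "a");   /* kernel permission check */
        if (!f) _exit(111);
        fputs("delivered\n", f);              /* stand-in for the message */
        fclose(f);
        _exit(0);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}

The cost is an extra fork per delivery – exactly the kind of option the “speed” distraction would have ruled out before it was ever tried.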

Thinking about the TCB

Programmers writing word processors and music players generally don’t worry about security. But users expect those programs to be able to handle files received by email or downloaded from the web. Some of those files are prepared by attackers. Often the programs have bugs that can be exploited by the attackers…

Bernstein offers as an example of better practice moving the processing of untrusted user input (in this case, converting JPEGs to bitmaps) out of the main process and into what is essentially a locked-down container.
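A minimal sketch of that prison, assuming a POSIX system with enough privilege to chroot; the empty directory, the unprivileged uid, and the pass-through decoder are all placeholders rather than anything from the paper:

#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Placeholder for the real jpeg-to-bitmap conversion: copies bytes through. */
static int decode_placeholder(int in_fd, int out_fd) {
    char buf[4096];
    ssize_t n;
    while ((n = read(in_fd, buf, sizeof buf)) > 0)
        if (write(out_fd, buf, (size_t)n) != n) return 1;
    return n < 0;
}

/* Decode untrusted input in a child that has given up everything it can:
 * no filesystem view (empty chroot), no privileges (unprivileged uid/gid). */
int decode_in_prison(int in_fd, int out_fd) {
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {
        if (chroot("/var/empty") != 0 || chdir("/") != 0) _exit(1);
        if (setgid(65534) != 0 || setuid(65534) != 0) _exit(1);  /* nobody */
        _exit(decode_placeholder(in_fd, out_fd));
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}

A decoder bug now yields control of a process that can touch nothing but its two file descriptors.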

I don’t know exactly how small the ultimate TCB will be, but I’m looking forward to finding out. Of course, we still need to eliminate bugs from the code that remains!

Meltdown and Spectre should give us cause to reflect on what we’re doing and where we’re heading. If the result of that is a little more Bernstein-style meta-engineering to improve the security of the software produced by our processes, then maybe some good can come out of them after all.




Finding a CPU Design Bug in the Xbox 360


The recent reveal of Meltdown and Spectre reminded me of the time I found a related design bug in the Xbox 360 CPU – a newly added instruction whose mere existence was dangerous.

Back in 2005 I was the Xbox 360 CPU guy. I lived and breathed that chip. I still have a 30-cm CPU wafer on my wall, and a four-foot poster of the CPU’s layout. I spent so much time understanding how that CPU’s pipelines worked that when I was asked to investigate some impossible crashes I was able to intuit how a design bug must be their cause. But first, some background…

[Image: annotated Xbox 360 die] The Xbox 360 CPU is a three-core PowerPC chip made by IBM. The three cores sit in three separate quadrants, with the fourth quadrant containing a 1-MB L2 cache – you can see the different components in the picture and on my CPU wafer. Each core has a 32-KB instruction cache and a 32-KB data cache.

Trivia: Core 0 was closer to the L2 cache and had measurably lower L2 latencies.

The Xbox 360 CPU had high latencies for everything, with memory latencies being particularly bad. And, the 1-MB L2 cache (all that could fit) was pretty small for a three-core CPU. So, conserving space in the L2 cache in order to minimize cache misses was important.

CPU caches improve performance due to spatial and temporal locality. Spatial locality means that if you’ve used one byte of data then you’ll probably use other nearby bytes of data soon. Temporal locality means that if you’ve used some memory then you will probably use it again in the near future.
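As a toy illustration (mine, not from the original post) of the two kinds of locality in C:

#include <stddef.h>

long sum(const int* a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];   /* spatial: a[i+1] is on the same or the next cache line */
    return s;
}

long sum_twice(const int* a, size_t n) {
    /* temporal: if the array still fits in cache, the second pass
     * hits lines the first pass already loaded */
    return sum(a, n) + sum(a, n);
}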

But sometimes temporal locality doesn’t actually happen. If you are processing a large array of data once-per-frame then it may be trivially provable that it will all be gone from the L2 cache by the time you need it again. You still want that data in the L1 cache so that you can benefit from spatial locality, but having it consuming valuable space in the L2 cache just means it will evict other data, perhaps slowing down the other two cores.

Normally this is unavoidable. The memory coherency mechanism of our PowerPC CPU required that all data in the L1 caches also be in the L2 cache. The MESI protocol used for memory coherency requires that when one core writes to a cache line that any other cores with a copy of the same cache line need to discard it – and the L2 cache was responsible for keeping track of which L1 caches were caching which addresses.
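As a toy model (my sketch, not the Xbox 360’s actual coherency hardware) of the MESI rule just described, where a write by one core invalidates every other core’s copy of the line:

typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } mesi_state;

/* Called when 'writer' stores to a line; every other core's copy of
 * that line must be discarded before the write can proceed. */
void on_write(mesi_state line_state[], int ncores, int writer) {
    for (int c = 0; c < ncores; c++)
        line_state[c] = (c == writer) ? MODIFIED : INVALID;
}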

[Image: about 40 cores on my wafer, L2 caches visible] But the CPU was for a video game console and performance trumped all, so a new instruction was added – xdcbt. The normal PowerPC dcbt instruction was a typical prefetch instruction. The xdcbt instruction was an extended prefetch instruction that fetched straight from memory to the L1 d-cache, skipping L2. This meant that memory coherency was no longer guaranteed, but hey, we’re video game programmers, we know what we’re doing, it will be fine.

Oops.

I wrote a widely-used Xbox 360 memory copy routine that optionally used xdcbt. Prefetching the source data was crucial for performance; normally the routine used dcbt, but if you passed in the PREFETCH_EX flag it would prefetch with xdcbt instead. This was not well thought out. The prefetching was basically:

if (flags & PREFETCH_EX)
  __xdcbt(src+offset);  /* extended prefetch: straight to L1, bypassing L2 */
else
  __dcbt(src+offset);   /* normal prefetch: coherency preserved */

A game developer who was using this function reported weird crashes – heap corruption crashes, but the heap structures in the memory dumps looked normal. After staring at the crash dumps for a while I realized what a mistake I had made.

Memory that is prefetched with xdcbt is toxic. If it is written by another core before being flushed from L1 then two cores have different views of memory and there is no guarantee their views will ever converge. The Xbox 360 cache lines were 128 bytes and my copy routine’s prefetching went right to the end of the source memory, meaning that xdcbt was applied to some cache lines whose latter portions were part of adjacent data structures. Typically this was heap metadata – at least that’s where we saw the crashes. The incoherent core saw stale data (despite careful use of locks) and crashed, but the crash dump wrote out the actual contents of RAM rather than the stale L1 view, so we couldn’t see what had happened.

So, the only safe way to use xdcbt was to be very careful not to prefetch even a single byte beyond the end of the buffer. I fixed my memory copy routine to avoid prefetching too far, but while waiting for the fix the game developer stopped passing the PREFETCH_EX flag and the crashes went away.
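A sketch in the spirit of that fix (my reconstruction, not the actual shipped routine): only issue xdcbt for a cache line that lies wholly inside the source buffer, and fall back to the coherent dcbt otherwise. __xdcbt and __dcbt are the intrinsics from the snippet above, so this only compiles with the Xbox 360 toolchain:

#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 128u   /* Xbox 360 cache lines were 128 bytes */

static void prefetch_src(const char* src, size_t size, size_t offset,
                         int use_extended) {
    /* Start of the cache line this prefetch would pull in. */
    uintptr_t line = ((uintptr_t)src + offset) & ~(uintptr_t)(CACHE_LINE - 1);
    /* xdcbt is safe only if the whole line sits inside the buffer. */
    if (use_extended && line >= (uintptr_t)src
                     && line + CACHE_LINE <= (uintptr_t)src + size)
        __xdcbt((void*)line);
    else
        __dcbt((void*)line);
}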

The real bug

So far so normal, right? Cocky game developers play with fire, fly too close to the sun, marry their mothers, and a game console almost misses Christmas.

But, we caught it in time, we got away with it, and we were all set to ship the games and the console and go home happy.

And then the same game started crashing again.

The symptoms were identical. Except that the game was no longer using the xdcbt instruction. I could step through the code and see that. We had a serious problem.

I used the ancient debugging technique of staring at my screen with a blank mind, let the CPU pipelines fill my subconscious, and I suddenly realized the problem. A quick email to IBM confirmed my suspicion about a subtle internal CPU detail that I had never thought about before. And it’s the same culprit behind Meltdown and Spectre.

The Xbox 360 CPU is an in-order CPU. It’s pretty simple really, relying on its high frequency (not as high as hoped despite 10 FO4) for performance. But it does have a branch predictor – its very long pipelines make that necessary. Here’s a publicly shared CPU pipeline diagram I made (my cycle-accurate version is NDA only, but looky here) that shows all of the pipelines:

[Image: Xbox 360 CPU pipeline diagram]

You can see the branch predictor, and you can see that the pipelines are very long (wide on the diagram) – plenty long enough for mispredicted instructions to get up to speed, even with in-order processing.

So, the branch predictor makes a prediction and the predicted instructions are fetched, decoded, and executed – but not retired until the prediction is known to be correct. Sound familiar? The realization I had – it was new to me at the time – was what it meant to speculatively execute a prefetch. The latencies were long, so it was important to get the prefetch transaction on the bus as soon as possible, and once a prefetch had been initiated there was no way to cancel it. So a speculatively-executed xdcbt was identical to a real xdcbt! (a speculatively-executed load instruction was just a prefetch, FWIW).

And that was the problem – the branch predictor would sometimes cause xdcbt instructions to be speculatively executed and that was just as bad as really executing them. One of my coworkers (thanks Tracy!) suggested a clever test to verify this – replace every xdcbt in the game with a breakpoint. This achieved two things:

  1. The breakpoints were not hit, thus proving that the game was not executing xdcbt instructions.
  2. The crashes went away.

I knew that would be the result and yet it was still amazing. All these years later, and even after reading about Meltdown, it’s still nerdy cool to see solid proof that instructions that were not executed were causing crashes.

The branch predictor realization made it clear that this instruction was too dangerous to have anywhere in the code segment of any game – controlling when an instruction might be speculatively executed is too difficult. The branch predictor for indirect branches could, theoretically, predict any address, so there was no “safe place” to put an xdcbt instruction. And, if speculatively executed it would happily do an extended prefetch of whatever memory the specified registers happened to randomly contain. It was possible to reduce the risk, but not eliminate it, and it just wasn’t worth it. While Xbox 360 architecture discussions continue to mention the instruction I doubt that any games ever shipped with it.

I mentioned this once during a job interview – “describe the toughest bug you’ve had to investigate” – and the interviewer’s reaction was “yeah, we hit something similar on the Alpha processor”. The more things change…

Thanks to Michael for some editing.

Postscript

How can a branch that is never taken be predicted to be taken? Easy. Branch predictors don’t maintain perfect history for every branch in the executable – that would be impractical. Instead simple branch predictors typically squish together a bunch of address bits, maybe some branch history bits as well, and index into an array of two-bit entries. Thus, the branch predict result is affected by other, unrelated branches, leading to sometimes spurious predictions. But it’s okay, because it’s “just a prediction” and it doesn’t need to be right.
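To make that concrete, here is a minimal sketch (mine; as the post notes, the Xbox 360’s actual predictor details are NDA-only) of such a table of two-bit saturating counters, where the index hash is exactly what lets unrelated branches alias:

#include <stdbool.h>
#include <stdint.h>

#define TABLE_BITS 12
static uint8_t counters[1u << TABLE_BITS];  /* values 0..3; >= 2 predicts taken */

static unsigned index_for(uintptr_t branch_addr) {
    /* Squish address bits together; distinct branches can collide. */
    return (unsigned)((branch_addr ^ (branch_addr >> TABLE_BITS))
                      & ((1u << TABLE_BITS) - 1));
}

bool predict_taken(uintptr_t branch_addr) {
    return counters[index_for(branch_addr)] >= 2;
}

void train(uintptr_t branch_addr, bool taken) {
    uint8_t* c = &counters[index_for(branch_addr)];
    if (taken)  { if (*c < 3) (*c)++; }
    else        { if (*c > 0) (*c)--; }
}

An xdcbt whose address hash collides with a hot taken branch can therefore be predicted into execution without ever being architecturally reached.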

Discussions of this post can be found on hacker news, r/programming, r/emulation, and twitter.

A somewhat related (Xbox, caches) bug was discussed a few years ago here.


Security Vulnerabilities in Star Wars


A fun video describing some of the many Imperial security vulnerabilities in the first Star Wars movie.

Happy New Year, everyone.


34C3: The First Day is a Doozy


It’s 5 pm, the sun is slowly setting on the Leipzig conference center, and although we’re only halfway through the first day, there’s a ton that you should see. We’ll report some more on the culture of the con later — for now here’s just the hacks.

Electric Car Charging Stations: Spoofing and Reflashing

Electric autos are the future, right? Well, for now we need to figure out how to charge them. All across Germany, charging stations are popping up like dandelions. How do they work? Are they secure? [Mathias Dalheimer] bought a couple of charging stations, built himself a car simulator, spoofed some NFC cards, and found that the whole thing was full of holes. The talk is in German, and doesn’t yet have subtitles, but the takeaways are that it’s trivial to offload charging costs onto other people by cloning their NFC cards. Worse, the charging stations are Internet-accessible, and of course remotely controllable. With physical access and a screwdriver, the entire station can be reflashed, and then the game’s up. [Mathias] ended his talk with a call for community involvement in shaping the next generation of charging-station protocols and software, because after all, this is infrastructure that we’d all like to use in the future.

Open-Source Silicon: Verifying the RISC-V Spec

If we were to pick one of the largest developments in the open-source hardware industry this year, we’d call 2017 the year of open silicon. In particular, the open RISC-V processor came out in hardware that you can play around with now. In ten years, when we’re all running open-silicon “Arduinos”, remember this time. And if you haven’t been watching [Clifford Wolf], you might have missed that he wrote the 3D modelling software OpenSCAD and the free FPGA toolchain, Project IceStorm.

Anyway, [Clifford] has turned his attention to the RISC-V architecture. He’s been working on formally verifying that a hardware design meets the RISC-V specification. In contrast to simulation, where you run the hardware from a bunch of starting values, and see if it ends up in an undesired state, formal verification proves that the hardware design doesn’t do the wrong things, at least for a certain number of cycles after startup.
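As a toy contrast (mine, not from the talk; real RISC-V verification runs solvers over the actual HDL, for instance via [Clifford]’s riscv-formal work), consider checking a tiny “design” against its spec. Simulation samples some inputs; an exhaustive bounded check covers all of them and is guaranteed to find the bug:

#include <stdint.h>
#include <stdio.h>

/* The "specification": 8-bit saturating add. */
static uint8_t spec_add(uint8_t a, uint8_t b) {
    unsigned s = (unsigned)a + b;
    return s > 255 ? 255 : (uint8_t)s;
}

/* The "implementation" under test, with a deliberate bug: it wraps. */
static uint8_t impl_add(uint8_t a, uint8_t b) {
    return (uint8_t)(a + b);
}

int main(void) {
    /* Exhaustive check over the entire input space. */
    for (unsigned a = 0; a < 256; a++)
        for (unsigned b = 0; b < 256; b++)
            if (impl_add((uint8_t)a, (uint8_t)b)
                    != spec_add((uint8_t)a, (uint8_t)b)) {
                printf("mismatch at a=%u, b=%u\n", a, b);
                return 1;
            }
    puts("implementation matches the spec");
    return 0;
}

Formal tools achieve that total coverage symbolically, and over sequences of cycles rather than single operations, which is what makes them so much stronger than directed simulation.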

All of this is nice, but it’s not worth doing unless it’s finding bugs. And he’s found bugs in nearly every RISC-V implementation, and also in the actual English-language specification as well. A free and open formal verification suite for an open processor specification eases the way for all future developers. This may seem abstruse at the moment, but it’s paving the way for a revolution.

Robotic Vacuum Cleaners: Rooting the Xiaomi Blinds the Cloud

The Xiaomi robotic vacuum cleaner would certainly make a great platform for hacker explorations: it has a LIDAR, batteries, decent motors, electronic compass, ultrasonic “radar”, and much more. [Dennis Giese] and [Daniel AW] got root on the device, opening it up completely. Watch the preliminary version of the talk here. They dumped the MMC flash by shorting pins to ground with a piece of aluminum foil, and then fooled the update procedure into accepting their own image, and the game was over. They then went on to work around all of Xiaomi’s cloud services, allowing entirely self-contained operation if you’d like.

Interestingly enough, [Dennis] and [Daniel] found a reference to a tcpdump command that would eavesdrop on all network traffic inside your WLAN. It didn’t seem to be running, because there were no pcap files to be found. It could be a left-over from development, or it could be something more sinister. Xiaomi has just been featured on Hackaday for their nightlight that sends ridiculous amounts of data home. In this light (tee-hee) it’s not entirely surprising to find that their vacuum is doing the same thing — draw your own conclusions.

The Intel Management Engine, Again

One of the bigger vulnerabilities disclosed this year was the crack of the Intel Management Engine. It’s a hidden computer inside your computer, which doubles as the root of trust for basically everything else. If it could be compromised, it would be the end. It has always been shrouded in secrecy, and that’s made everyone nervous. [Maxim Goryachy], [Mark Ermolov], and [Dmitry Sklyarov] managed to attack it via a JTAG port. If you want to get into the hack in detail, this talk is for you. This hack was a very big chink in the armor of obscurity surrounding the IME. It will be interesting to see what next year brings.

What’s One Bit Between Friends?

In this technical yet accessible talk, [Filippo Valsorda] walks us through a bug he found in an encryption algorithm deep inside a Go library, and how he used a one-bit error that occurs around one time in a billion to extract the entire 256-bit secure key. By carefully crafting a public key, he can use the extremely infrequent error to sequentially unravel the entire secret. The particular bug that he found is fixed, of course, but the method of deploying tons of computing power to ferret out keys just shows how far you can push even the tiniest oracle. This talk demonstrates very explicitly that even the smallest bug is too big.

Networks Before the Internet: BBS Memory Lane

[LaForge] is an open-source radio hacker. If you’ve done any SDR work, you may have used drivers from his Osmocom project. But like the rest of us, he was a young nerdling once. And when he was young, the BBS scene was the big deal. In this non-technical talk, he takes a trip down memory lane and looks at the tech that underlies the BBS era.

What’s Next?

If you’re wondering where we’re going to be tonight, check out the schedule and watch live streams. In particular, there’s a talk on the state of computing in North Korea, tweaking FitBits, cracking WPA2, and a talk that promises to be the “Ultimate Apollo Guidance Computer Talk”. And then we’ll take a nap, and do it all again tomorrow.

We can’t see it all. Let us know what you’ve seen, and what we must see.


One comment, from jepler (24 days ago):
One talk was about "a bug he found in an encryption algorithm deep inside a Go library, and how he used a one-bit error that occurs around one time in a billion to extract the entire 256-bit secure key".

I swear I remember reading a paper which showed that a hypothetical multiplier bug affecting 1 result out of 2^128 pairs of 64-bit inputs could leak keys in just this way. False memory, or did someone actually implant the hypothetical bug in a real-world product? Eek.

aha here's the paper, or one that is nearby the paper I remember. 9 years old. http://www.cs.technion.ac.il/~yanivca/BugAttacks.pdf