Software developer, racing fan · 763 stories · 42 followers

beccap: “But the 8-hour workday is too profitable for big business, not because of the amount of...

1 Comment and 11 Shares

beccap:

“But the 8-hour workday is too profitable for big business, not because of the amount of work people get done in eight hours (the average office worker gets less than three hours of actual work done in 8 hours) but because it makes for such a purchase-happy public. Keeping free time scarce means people pay a lot more for convenience, gratification, and any other relief they can buy. It keeps them watching television, and its commercials. It keeps them unambitious outside of work. We’ve been led into a culture that has been engineered to leave us tired, hungry for indulgence, willing to pay a lot for convenience and entertainment, and most importantly, vaguely dissatisfied with our lives so that we continue wanting things we don’t have. We buy so much because it always seems like something is still missing.”

Your Lifestyle Has Already Been Designed

1 public comment from shanel (New York, New York): Checks out...

DICA DO DIA

2 Shares
The Camboatá Forest: an ecological sanctuary (photo: Gustavo Pedro/piauí)

RIO (back again) – Roberto Kaz's exceptional piece in the latest issue of “piauí” magazine about the Camboatá Forest, where the unwary still believe a race track will be built in Rio, is essential reading. He also profiles the fabulous JR Pereira and his far-fetched Rio Motorpark, the consortium that “won” the bid to occupy the area.

If, after reading it, anyone still thinks this undertaking is feasible, I invite the believers to visit the edges of our beautiful, flat Earth. I'll pay for the tickets.

The post DICA DO DIA appeared first on Blog do Flavio Gomes.



Facebook's Pentagon Papers Moment

1 Comment and 2 Shares
A legal case between FACEB...err, Facebook and app developer Six4Three resulted in a massive corpus of documents released in discovery and sealed by the judge in the matter; those documents then received more time in the spotlight when British MP Damian Collins used his legal authority to seize control of them. On Wednesday, investigative reporter Duncan Campbell released his copy of the full leaked corpus of documents.

The documents reveal much about Facebook's use of user data, with some particular points of note:

* Facebook wielded its control over user data to hobble rivals like YouTube, Twitter, and Amazon. The company benefited its friends even as it took aggressive action to block rival companies' access – while framing its actions as necessary to protect user privacy.

* Facebook executives quietly planned a data-policy "switcharoo." "Facebook began cutting off access to user data for app developers from 2012 to squash potential rivals while presenting the move to the general public as a boon for user privacy," Reuters reported on Wednesday, citing the leaked documents.

* Facebook considered charging companies to access user data. Documents made public in late 2018 revealed that from 2012 to 2014, Facebook was contemplating forcing companies to pay to access users' data. (It didn't ultimately follow through with the plan.)

* Facebook whitelisted certain companies to allow them more extensive access to user data, even after it locked down its developer platform throughout 2014 and 2015. TechCrunch reported in December that it "is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not."

* Facebook planned to spy on the locations of Android users. Citing the documents, Computer Weekly reported in February that "Facebook planned to use its Android app to track the location of its customers and to allow advertisers to send political advertising and invites to dating sites to 'single' people."

1 comment from jepler (Earth, Sol system, Western spiral arm): "Win when you can, lose if you must, but ALWAYS ALWAYS CHEAT" (and lie)

An analysis of performance evolution of Linux’s core operations

2 Shares

An analysis of performance evolution of Linux’s core operations Ren et al., SOSP’19

I was drawn in by the headline results here:

This paper presents an analysis of how Linux’s performance has evolved over the past seven years… To our surprise, the study shows that the performance of many core operations has worsened or fluctuated significantly over the years.

When I got into the details, though, I found it hard to come away with any strongly actionable takeaways. Perhaps the most interesting lesson/reminder is this: it takes a lot of effort to tune a Linux kernel. For example:

  • “Red Hat and Suse normally required 6-18 months to optimise the performance of an upstream Linux kernel before it can be released as an enterprise distribution”, and
  • “Google’s data center kernel is carefully performance tuned for their workloads. This task is carried out by a team of over 100 engineers, and for each new kernel, the effort can also take 6-18 months.”

Meanwhile, Linux releases a new kernel every 2-3 months, with between 13,000 and 18,000 commits per release.

Clearly, performance comes at a high cost, and unfortunately, this cost is difficult to get around. Most Linux users cannot afford the amount of resource large enterprises like Google put into custom Linux performance tuning…

For Google of course, there's an economy of scale that makes all that effort worth it. For the rest of us, if you really need that extra performance (and maybe what you get out-of-the-box or with minimal tuning is good enough for your use case), then you can upgrade hardware and/or pay for a commercial license of a tuned distribution (e.g. RHEL).

A second takeaway is this: security has a cost!

Measuring the kernel

The authors selected a set of diverse application workloads (listed in a table in the paper) and analysed their execution to find out the system call frequency and total execution time.

A micro-benchmark suite, LEBench, was then built around the system calls responsible for most of the time spent in the kernel.

On the exact same hardware, the benchmark suite is then used to test 36 Linux release versions from 3.0 to 4.20.
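
To make the measurement approach concrete, here is a minimal sketch of the kind of microbenchmark involved. It is not the authors' LEBench code; the choice of read() on /dev/zero, the 4 KB buffer and the iteration count are illustrative assumptions.

```c
/* Minimal LEBench-style microbenchmark sketch (illustrative, not the
 * authors' code): time one kernel operation (a 4 KB read() from /dev/zero)
 * over many iterations and report the mean latency. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ITERS 1000000L

int main(void)
{
    char buf[4096];
    int fd = open("/dev/zero", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < ITERS; i++) {
        if (read(fd, buf, sizeof buf) < 0) { perror("read"); return 1; }
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    printf("read(4KB): %.1f ns/call\n", ns / ITERS);

    close(fd);
    return 0;
}
```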

Headline results

All kernel operations are slower than they were four years ago (version 4.0), except for big-write and big-munmap. The majority (75%) of the kernel operations are slower than seven years ago (version 3.0). Many of the slowdowns are substantial…

The following figure shows the relative speed-up/slow-down across the benchmarked calls (y-axis) across releases (x-axis). The general pattern to my eye is that things were getting better / staying stable until around v4.8-v4.14, and after that performance starts to degrade noticeably.

Analysis

We identify 11 kernel changes that explain the significant performance fluctuations as well as more steady sources of overhead.

These changes fall into three main groups:

  1. (4) Security enhancements (e.g. to protect against Meltdown and Spectre).
  2. (4) New features introduced into the kernel that came with a performance hit in some scenarios
  3. (3) Configuration changes

In terms of the maximum combined slowdown though, it's not the Meltdown and Spectre patches that cause the biggest slowdowns (146% cf. a 4.0 baseline), but missing or misconfigured configuration changes (171%). New features also contribute a combined maximum slowdown of 167%. If you drill down into the new features though, some of these are arguably security related too (e.g. the cgroup memory controller change for containers).

The following chart shows the impact of these 11 changes across the set of system calls under study.

It’s possible to avoid the overheads from these 11 changes if you want to, but that doesn’t feel like a path to recommend for most of them!

With little effort, Linux users can avoid most of the performance degradation from the identified root causes by actively reconfiguring their systems. In fact, 8 out of 11 root causes can be disabled through configuration, and the other 3 can be disabled through simple patches.

Testing against real-world workloads (Redis, Apache, Nginx), disabling the 11 root causes results in maximum performance improvements in these three applications of 56%, 33%, and 34% respectively. On closer examination, 88% of the slowdowns experienced by these applications can be tied back to just four of the eleven changes: forced context tracking (a configuration error), kernel page table isolation (Meltdown protection), missing CPU idle power states (in the configuration bucket, but really due to older kernel versions lacking specifications for the newer hardware used in the benchmarking, which is kind of fair game?), and avoidance of indirect jump speculation (Spectre).
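
Before reconfiguring anything, it helps to know which of these mitigations the running kernel actually has enabled. Here is a small sketch of one way to check (my own illustration, not from the paper): it reads the files under /sys/devices/system/cpu/vulnerabilities/, which kernels from around 4.15 onwards expose. Turning protections off, e.g. with boot parameters such as pti=off or spectre_v2=off, is exactly the security/performance trade-off the paper is quantifying.

```c
/* Sketch: print the kernel's own report of which hardware-vulnerability
 * mitigations are in effect, from /sys/devices/system/cpu/vulnerabilities/
 * (present on kernels >= ~4.15). */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    const char *dir = "/sys/devices/system/cpu/vulnerabilities";
    DIR *d = opendir(dir);
    if (!d) { perror(dir); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;                      /* skip "." and ".." */
        char path[512];
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        char line[256];
        if (fgets(line, sizeof line, f))
            printf("%-16s %s", e->d_name, line);   /* line keeps its '\n' */
        fclose(f);
    }
    closedir(d);
    return 0;
}
```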

Security related root causes

  1. Kernel page table isolation (KPTI), introduced to protect against Meltdown. The average slowdown caused by KPTI across all microbenchmarks is 22%, with recv and read tests seeing 63% and 59% slowdowns. Before KPTI, the kernel and user space used one shared page table; with KPTI they have separate page tables. The main source of introduced overhead is swapping the page table pointers on every kernel entry and exit, together with a TLB flush. The flush can be avoided on processors with the process-context identifier (PCID) feature, but even this isn't enough to avoid the reported slowdowns.
  2. Avoidance of indirect branch speculation (the Retpoline patch) to protect against Spectre. This causes average slowdowns of 66% across the select, poll, and epoll tests. The more indirect jumps and calls in a test, the worse the overhead. The authors found that turning each indirect call here into a switch statement (a direct conditional branch) alleviates the performance overhead; a minimal sketch of that transformation follows this list.
  3. SLAB freelist randomization, which increases the difficulty of exploiting buffer overflow bugs. By randomising the order of free spaces for objects in a SLAB, there is a notable overhead (37-41%) when sequentially accessing large amounts of memory.
  4. The hardened usercopy patch, which validates kernel pointers used when copying data between userspace and the kernel.
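
To illustrate the workaround mentioned in point 2, here is a small userspace analogue (my own sketch, not the authors' kernel patch): dispatch through a function pointer, which a retpoline turns into a comparatively expensive sequence, versus a switch on a small id, which compiles to direct calls behind ordinary conditional branches.

```c
/* Illustrative analogue of replacing an indirect call with a switch
 * statement, as described above for the kernel's select/poll/epoll paths. */
#include <stdio.h>

static long op_a(long x) { return x + 1; }
static long op_b(long x) { return x * 2; }
static long op_c(long x) { return x - 3; }

typedef long (*op_fn)(long);
static op_fn ops[] = { op_a, op_b, op_c };

/* Indirect dispatch: one speculation-mitigated indirect branch per call. */
static long dispatch_indirect(int id, long x)
{
    return ops[id](x);
}

/* Direct dispatch: conditional branches plus direct calls, which the
 * Spectre mitigations leave untouched. */
static long dispatch_switch(int id, long x)
{
    switch (id) {
    case 0: return op_a(x);
    case 1: return op_b(x);
    case 2: return op_c(x);
    default: return -1;
    }
}

int main(void)
{
    long acc = 0;
    for (int i = 0; i < 9; i++)
        acc += dispatch_indirect(i % 3, i) + dispatch_switch(i % 3, i);
    printf("%ld\n", acc);   /* both dispatchers compute the same values */
    return 0;
}
```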

New-feature related root causes

  1. The ‘fault-around’ feature aims to reduce the number of minor page faults, but introduces a 54% slowdown in the ‘big-pagefault’ test where its access pattern assumptions do not hold.
  2. The cgroup memory controller was introduced in v2.6 and is a key building block of containerization technologies. It adds overhead to tests that exercise the kernel memory controller, even when cgroups aren't being used. It took 6.5 years (until v3.17) for this overhead to begin to be optimised. Before those optimisations, slowdowns of up to 81% were observed; afterwards this was reduced to 9%.
  3. Transparent huge pages (THP) have been in and out and in and out again as a feature enabled by default. THP automatically adjusts the default page size and allocates 2MB (huge) pages, but can fall back to 4KB pages under memory pressure. Currently it is disabled by default. In what seems to be a case of damned-if-you-do, damned-if-you-don't, without THP some tests are up to 83% slower. (A minimal madvise-based sketch follows this list.)
  4. Userspace page fault handling allows a userspace process to handle page faults for a specific memory region. In most cases its overhead is negligible, but the big-fork test sees a 4% slowdown with it.
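
As referenced in point 3, here is a minimal sketch of opting a single mapping in to transparent huge pages with madvise(MADV_HUGEPAGE). This only has an effect when the system-wide THP policy permits it (the 'always' or 'madvise' settings); the 64 MB region size and the bare-bones error handling are illustrative simplifications.

```c
/* Sketch: request 2 MB transparent huge pages for one anonymous mapping. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64UL << 20;                     /* 64 MB, arbitrary */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ask the kernel to back this range with huge pages where it can. */
    if (madvise(p, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");

    memset(p, 0, len);   /* touch the memory so pages are actually faulted in */

    munmap(p, len);
    return 0;
}
```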

Configuration related root causes

  1. Forced context tracking was released into the kernel by mistake in versions 3.10 and 3.12-15 (it's a debugging feature used in the development of the reduced scheduling clock-ticks – RSCT – feature). It was enabled in several released Ubuntu kernels due to misconfiguration. Forced context tracking was finally switched off 11 months after the initial misconfiguration. It slowed down all of the 28 tests by at least 50%, 7 of them by more than 100%.
  2. The TLB layout change patch was introduced in v3.14, and enables Linux to recognise the size of the second-level TLB on newer Intel processors. It’s on the list as a configuration related problem since there was a six-month period when the earliest Haswell processors were released but the patch wasn’t, resulting in a slowdown running on those processors.
  3. The CPU idle power-state support patch similarly informs the kernel about fine-grained power-saving states available on Intel processors. It’s on the list because it wasn’t backported to the LTS kernel lines at the time, giving reduced performance on newer processors with those kernels.
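
A rough way to relate a given kernel build to the root causes above is to look at its configuration. The sketch below (my own, not from the paper) scans /boot/config-$(uname -r) for the options I believe correspond to several of them; the option names are educated guesses for the 3.x/4.x kernels studied and may be named differently, or be absent, on other versions.

```c
/* Sketch: grep the running kernel's build config for options related to
 * the root causes discussed above. Option names are best-effort guesses. */
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

int main(void)
{
    const char *opts[] = {
        "CONFIG_PAGE_TABLE_ISOLATION",   /* KPTI (Meltdown) */
        "CONFIG_RETPOLINE",              /* indirect-branch mitigation (Spectre) */
        "CONFIG_SLAB_FREELIST_RANDOM",   /* SLAB freelist randomization */
        "CONFIG_HARDENED_USERCOPY",      /* hardened usercopy */
        "CONFIG_TRANSPARENT_HUGEPAGE",   /* THP */
        "CONFIG_MEMCG",                  /* cgroup memory controller */
        "CONFIG_CONTEXT_TRACKING_FORCE", /* forced context tracking */
    };

    struct utsname u;
    if (uname(&u) != 0) { perror("uname"); return 1; }

    char path[256];
    snprintf(path, sizeof path, "/boot/config-%s", u.release);
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    char line[512];
    while (fgets(line, sizeof line, f)) {
        for (size_t i = 0; i < sizeof opts / sizeof opts[0]; i++) {
            const char *hit = strstr(line, opts[i]);
            size_t n = strlen(opts[i]);
            /* require '=' or ' ' after the name so CONFIG_MEMCG does not
             * also match CONFIG_MEMCG_SWAP, etc. */
            if (hit && (hit[n] == '=' || hit[n] == ' '))
                fputs(line, stdout);
        }
    }
    fclose(f);
    return 0;
}
```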



NÃO SAIU DE GRAÇA

1 Share
Petrobras gets out of the car on the eve of the Brazilian GP

RIO (tell that to someone else) – A few months ago Brazil's odd and histrionic government went around trumpeting on social media that it was going to break Petrobras's contract with McLaren. They threw out unrealistic numbers, using these people's typical strategy of saying any old crap and hoping the rabble believes it. They talked about something in the region of 800 million reais, give or take, for what was considered a senseless sponsorship.

The reality: the contract between the Brazilian company and the English team was for 60 million pounds over six years, a figure that today is equivalent to 300 million reais. In other words: 10 million pounds a year to put the brand on the cars, the overalls and the uniforms of the team's members, and to develop products. The first two years have already been paid.

McLaren, of course, did not tear up the contract for free out of fear of Carluxo's tweets, Dudu do Cheeseburger's livestreams or the thuggery of the militiamen friends of Queiroz. The parties won't confirm it, citing confidentiality, but the penalty the state company will have to pay the orange team is around 100 million reais, that is, 20 million pounds, the equivalent of two more years of the contract.

Unlike Petrobras's first stint in F1, with Williams between 1998 and 2008, the partnership with McLaren yielded absolutely nothing from a technological point of view. No fuel produced by the company was approved by the team.

From 2014 to 2016 the state company had already returned to the category with Williams, basically to help pay Felipe Massa's salary. It wasn't the driver who brought in the sponsorship, but of course Petrobras would not have associated itself with the team were it not for the Brazilian's presence as a race driver.

When it joined McLaren at the beginning of last year, Petrobras's idea was to reproduce the collaboration it had had with Williams, which had ended up bearing good fruit in terms of research and technology. But the company had already been devastated by the Curitiba crowd and its priorities lay elsewhere, the biggest of them being to hand over the pre-salt reserves. From that point of view, the sponsorship really didn't make much sense.

Brace yourselves for the government propaganda today. On Twitter, as always, Bozoland will celebrate yet another of the "myth's" chest-bumps against the system. Of course nobody will mention the penalty.

But we will.

The post NÃO SAIU DE GRAÇA appeared first on Blog do Flavio Gomes.
