just
archive.is
techcrunch.com
Intel is dead as a company until ~2023, when it will probably recover.
nVidia will be dead post 2023.
But this news doesn't have anything to do with that.
AMD has similar holes as well. The best solution is to move toward open instruction sets. PowerShills should know that POWER is owned by IBM and the consortium around it is not free/libre at all. Arm is in a similar situation, with a single licensor for all manufacturers. Await RISC-V, or look into older hardware you can trust.
Waiting for rule 34.
That would be cool, but who cares? Normies? No, they just want their botnet running smoothly. Big companies? They don't have any choice, because most software only works on x86 or x86_64 processors and operating systems.
That's what happens when you trust a botnet company.
This. Most people don't care about these vulnerabilities; they already have their entire lives in the cloud.
The people who do care, though, are big tech; imagine the financial damage if all the data they collect for sale were dumped for free.
Who are these (((Security researchers)))?
I would bet anything it's Intel themselves. Intel knew damn well these vulnerabilities existed and were exploited by governments, but these exploits are stale now and they need to push new chips; they need to get everyone to stop using the old chips and buy newer botnet with fresh vulnerabilities.
This. Some subscribers to the (((Intel))) botnet didn't pay the required racket fees so they are pulling the plug on these 0 days.
I mean seriously the company is called INTEL and creates INTELligence chips. How much more obvious do they need to be for people to take notice?
Interesting aside: The movie (2011) that clip is from was out 2 years before Snowden's revelations (2013).
Movie: Echelon Conspiracy 2011
I told you but you didn't listen. You should have listened.
Clearly, this must be because of Unix and C.
Yes. Massive Unix braindamage here.
I wonder where Zig Forums would be right now if Jewntel had fallen for the parallelism meme during the Bulldozer days instead of riding the quadcore train for all eternity through sheer force of shilling.
Would there be 12 core CPUs in Thinkpads?
If there are no exploits and no actual technical information about this attack, how do I know this isn't just a bullshit microcode update that cripples processors to make consumers buy the latest thing, like the Spectre microcode updates?
I never recommend updating binary blobs you don't understand or control, because for all you know they could be malware. If this is an actual problem, it needs to be mitigated at a level the user can examine and then trust, like the kernel or compiler level. Otherwise it's just another attempt to install malware and destroy perfectly usable computers.
Yes, this vulnerability does not affect single-user workstations.
There's a PoC on GitHub linked from the website.
but where are the proofs
It's probably one of those too-big-to-fail companies, so your fantasies might not come true in our lifetime.
You're retarded. Intel chips will stay at 14nm until 2023; Intel itself said that. Their 10nm parts will be for special cases, not their mainline CPUs.
Meanwhile AMD is already on 7nm and Zen 4 will be 5nm.
AMD is also coming out with 16-core processors with 4 threads per core. If you're a gaymer you might not benefit immediately from this, but every professional will. I work with CAD programs for civil engineering, and this will be a blessing for calculations that often take hours to process.
AMD is already eating up Intel's market share, especially in servers. Next-gen consoles will all use AMD APUs as well.
So tell me how smart you are now for not knowing any of this.
As I said, Intel will probably rebound after 2023, as they'll probably figure out how to get to 7nm, but not before that.
hmmmm
Would Intel be inclined to invest in NVCTs/Optical computing or will they just go under in a wave of screeching as AMD becomes the new Intel and technology stagnates for another decade or so?
Will Google buy Intel provided the latter fails to recover?
2019~2023 AMD is going to dramatically increase its market share.
Actually, did you understand anything I said above?
I don't think AMD will become the new Intel. By 2023, the market could be split 50/50.
After 2023 things are uncertain. It seems every trump card is being unleashed now and a decade of stagnation after is very likely.
TSMC is going to have 5nm next year. Intel is planning 7nm in 2021. Samsung also already has 7nm chips in their latest phones.
extremetech.com
bump for freedom
Feature size is not the whole story. You know a lot less than you think you do.
Very informative. Thanks.
TSMC is AMD's foundry. They're also using high-density libraries for AMD at 5nm and beyond.
wccftech.com
They'll skip 10nm. But their plans are dire: the same problems afflicting their 10nm are also affecting their 7nm, mainly manufacturing problems. They use their own foundries.
Also, by the time Intel gets 7nm (if they fix their shit) AMD will be on 5nm.
In fact, the High End AMD Zen 3 APU (Arcturus) could actually be as powerful as a 1080Ti graphically.
Also, Intel is massively behind Core-wise and Thread-wise, it's like they froze in time.
Intel will be extremely behind, and nVidia will start fading into irrelevance.
Yes, but this doesn't affect AMD vs Intel.
What are the safest x86-64 compatible platforms which are still usable and acceptably power-efficient? Core2?
Once the stagnation happens for both AMD and Intel, which will be soon, libre is the future. Why libre? Because you can optimize everything about it for maximum performance and security, versus vendor lock-in with proprietary blobs like all AMD CPUs/GPUs have and all Intel processors/GPUs after Haswell have. At least with older Intel, ancient AMD GPUs, and semi-old Nvidia GPUs you can have libre everything. But anything brand new is botnet and a waste of money that could be trashed at the whim of a BIOS/UEFI update or a microcode update, because you can't see the source code to verify what it is doing.
Plus, unless you are running a web-facing server, doing huge amounts of video encoding, or running extremely parallel processing workloads, there's literally no performance reason to upgrade besides MAYBE battery life/performance per watt. The web-facing servers shouldn't be using x86 anyway and should have jumped ship to RISC-V or POWER9 hardware already. The video encoders are kinda screwed for FOSS solutions. The extremely parallel processing workloads can use heavily optimized older CPUs or POWER9/Talos.
ARM and newer x86 are inherently botnet and should be avoided. If you use libre software you can just recompile it for other platforms anyway, and you should not be using non-libre software because you can't improve its performance nor audit it for security holes.
I wonder what % of performance AMD/Intel sacrifice on their hardware backdoors; they're literally baked into the chip design. Anyone know?
Possibly, at least the earlier chipsets can be librebooted libreboot.org
Too bad I'm still running a Phenom desktop.
Actually, ironically enough, the Spectre vulnerability is a problem because it comes from a feature meant to increase performance for shitty programs: it abuses out-of-order execution, which is a performance crutch for programs that can't keep their ordering correct. Mind you, on x86 you are forced through the OOE scheduler and have no choice, even if your program is well-ordered enough to run in order.
But if you mean measurable loss, then look at the secondary processor for Intel ME and AMD PSP: it uses 1-2W of power, which is a total waste of what could be going to the CPU. Then there's the MMU, which uses about 0.5 watts, a waste that could be mitigated by better programming. The PCI-E/PCI controller is unnecessary, since the processor can directly address the PCI/E space, which would increase performance unless you need more than 3-5 PCIe devices. DRAM in excessive amounts wastes energy and thereby performance; if you use 256MB of RAM you should only have 256MB of physical RAM. All this wasted energy creates heat and slows everything down, contributing to overheating the processor. There are a bunch of little things like this that really add up and waste energy/performance on modern computers, either because programmers and programs are shit or because it's a forced meme like Intel ME/AMD PSP.
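Taking the post's own rough wattage figures at face value (the 1-2W ME/PSP and ~0.5W MMU numbers are estimates, not measurements), the overhead is easy to tally against a laptop power budget; the 15W TDP here is my own assumption, not from the post:

```python
# Tally the rough overhead figures quoted above against a CPU power budget.
# All numbers are ballpark estimates, not measured values.
overhead_w = {
    "ME/PSP coprocessor": 1.5,  # post says 1-2 W; take the midpoint
    "MMU": 0.5,                 # post's estimate
}
tdp_w = 15.0  # typical ultrabook CPU budget (assumption)

total_w = sum(overhead_w.values())
share = total_w / tdp_w
print(f"~{total_w:.1f} W of a {tdp_w:.0f} W budget = {share:.0%} gone before any real work")
```

On these numbers that's roughly an eighth of the budget, which is why the "little things add up" point holds even if each figure is small on its own.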
That's what you get for ruining technological progress by paying vidya devs to optimize for 4 cores/4 threads only, so the competitor with their novel approach to multi-core parallelism doesn't get off the ground.
Then you sit around peddling endless quad-core rehashes advertised by gaming benchmarks and YouTubers, while the competitor suffers enough for your ol' pal Nvidia to steamroll them by doing the same as you, leading to total tech stagnation by CY+0.
MRAM when?
But reddit told me it's good for Windows 10 to use 15GB at idle, because it will make my handful of 50MB programs run faster.
Fucking bloat. My browser doesn't even use 50MB; it uses 49MB after an insane amount of optimization, and no, I don't use a text browser, it's a Firefox derivative. Mind you, that 49MB is only the second biggest thing on my system; the first is the Linux kernel, topping out at 80MB in memory after optimization (short of editing source code or disabling netfilter/the GPU drivers), and that's fucking bloat. I would really like to bring everything down to 128MB for a GUI similar to Windows XP, realistically, but I still yearn for 640K. And this is before using something like uclibc, or a 16/32-bit data bus to shrink every allocation; 64-bit is only useful for compiling huge stuff like gcc/clang with many threads or jobs.
Well, I compiled the attacker and client proof-of-concept code, and it doesn't work/retrieve the secret data on Haswell-series Intel CPUs with my setup, as root or as a user.
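Before blaming the PoC, it's worth checking what the kernel itself reports for the machine; on patched kernels the status lives in /sys/devices/system/cpu/vulnerabilities/mds. A minimal sketch assuming that standard sysfs path (the parsing helper is mine, not part of the PoC):

```python
from pathlib import Path

MDS_STATUS = Path("/sys/devices/system/cpu/vulnerabilities/mds")

def parse_mds_status(text: str) -> dict:
    """Interpret a status line such as
    'Mitigation: Clear CPU buffers; SMT vulnerable' or 'Vulnerable'."""
    state, _, detail = text.strip().partition(";")
    return {
        "mitigated": state.lower().startswith("mitigation"),
        "vulnerable": state.lower().startswith("vulnerable"),
        "smt_note": detail.strip(),  # e.g. 'SMT vulnerable' when SMT is on
    }

if __name__ == "__main__":
    if MDS_STATUS.exists():
        print(parse_mds_status(MDS_STATUS.read_text()))
    else:
        print("Kernel predates MDS reporting (or this isn't Linux).")
```

If it says "Mitigation: Clear CPU buffers", the updated kernel/microcode is already scrubbing the buffers the PoC tries to sample, which would explain a failed run on otherwise-affected hardware.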
...
Any proof AMD is vulnerable to something similar?
Best part of this is that you need to disable SMT for the Linux kernel patches to provide any protection, see
linuxreviews.org
Congratulations, your Intel 8 thread CPU is now a 4 thread CPU.
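If you do disable SMT, the kernel tells you whether it stuck: /sys/devices/system/cpu/smt/control reads on, off, forceoff, notsupported, or notimplemented. A small sketch assuming that standard sysfs node (the helper function is mine):

```python
from pathlib import Path

SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

def smt_is_off(control_value: str) -> bool:
    """Anything other than plain 'on' means sibling threads are not running."""
    return control_value.strip() != "on"

if __name__ == "__main__":
    if SMT_CONTROL.exists():
        value = SMT_CONTROL.read_text().strip()
        print(f"smt/control = {value}; SMT disabled: {smt_is_off(value)}")
    else:
        print("No smt/control node; kernel too old or this isn't Linux.")
```

Writing "off" into that same node as root disables SMT at runtime; booting with nosmt on the kernel command line does it from the start.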
When will it end?
You fucking kidding me? What would I connect to it? I don't think I can use 3 GPUs at once. I could connect a superfluous soundcard with the same shit as already on the motherboard and I can't think of anything more. Maybe an ethernet card?
Isn't that stuff really expensive?
If it's reasonably priced, hopefully soon user.
powerpc-notebook.org
Why? Once it gets UEFI/some BIOS it would be a free platform, right?
Hitler never said that. 100% amerimutt propaganda.
The Power9 motherboards are (cheapest one is around $1K), but the chip prices aren't that much different than what Intel and AMD are offering these days.
See? That's what I'm talking about.
Why would anyone buy a 1k$ laptop?
Not that guy (and very happily running an EPYC machine) but it's pretty unlikely that AMD doesn't have any timing side channels in their OOO silicon.
This whole debacle is certainly making it more interesting to research in-order processors with exposed pipelines again.
It's because of bulk pricing. The people making the hardware are ordering much smaller batches, so the price per unit is going to be higher than what you'd spend on a device that uses an Intel motherboard and chip.
*The people designing the hardware...
Fuck.
So what choice do we have ?
EO-meme68? An expensive Talos system (which needs a fucking GPU...)?
Screw you, optics fam, I'm tired of this shit:
kernel /boot/lincuck quiet pti=off spectre_v2=off l1tf=off mds=off nospec_store_bypass_disable no_stf_barrier
I want to shed light on some things.
The first machine capable of out-of-order execution was the CDC 6600 (1964), and ARPANET was not established until 1969. So out-of-order execution was originally meant purely as a performance enhancement, back when computers were pretty slow.
Here's the thing. AOL (a company whose purpose was to gather information on Americans) became a thing around 1995, along with Intel's Pentium chips and of course Windows 95. I would guess deliberate backdoors by hardware companies started around that time. Some people were saying this really didn't happen until the Pentium 4.
I've looked into Spectre/NetSpectre a little bit, but it's lower-level than I understand. I've talked to one of the researchers and asked him whether you can do anything useful with NetSpectre, like dump an SSH key. He said he hadn't tried it.
It looks like this new generation of side channel attack could actually be used for dumping keys or password hashes for remote authentication. That's pretty fucking cool. Security researchers who figure out this really low level shit are pretty smart.
zombieloadattack.com
If you look at Intel's general stock trends since they went public, it's pretty obvious that they will recover.
It would be cool if they made RISCV chips that didn't cost an arm and a leg. Maybe that will happen some day.
ARM is where Intel/AMD copied the ME/PSP idea from; on ARM it's called ARM TrustZone. It's actually worse than the PSP/ME, because instead of being a dedicated coprocessor it takes up die space and instructions on the main processor. It has extra features that are bloat and backdoors at the same time, like a secure world vs non-secure world split, which is in a sense just a copy of the Linux kernel's userland vs kernel separation.
And in reality it's nothing, just like all the other vulnerabilities that no one has exploited yet.
haswell saves me again