Browser ENGINE Thread

Zig Forums has always been known for its in-depth browser threads, but how often do we really look under the hood and discuss the rendering engines that run these browsers and do most of the actual heavy lifting of displaying a webpage? Right now there are three main browser engines:

WebKit: Made by Apple. Written in C++. Used by Safari. Has an official GTK+ binding (WebKitGTK) for ease of embedding, as easy as GTK+ can be, anyway. Also has external projects providing bindings in many other languages.

Blink: Made by Google. Originally forked from WebKit. Written in C++. Used by Google Chrome. As far as I am aware, there are no official bindings for it. If you want to roll your own browser with it, you had better know what you're doing.

Gecko: Made by Mozilla. Infected with Rust. Used by Firefox. It used to be good before Mozilla lost the plot. Like Blink, there are no bindings for Gecko.

Other engines of note:
NetSurf: Originally built for RISC OS. Programmed in ANSI C. Doesn't have HTML5 support, and is just starting to get JavaScript support.

Goanna: Forked from Gecko. Used by Pale Moon. Like Gecko used to be before Quantum and the WebExtensions bullshit.

Servo: Mozilla's experimental engine. Extremely infected with Rust.

Attached: 740fc8f7d1844ca7957939d73db3b454adc13518cfbb32138983245324456a4c.jpg (1280x960, 168.32K)

Other urls found in this thread:

amp.reddit.com/r/firefox/comments/84qhai/quantum_is_still_terrible/
computerworld.com/article/3238075/firefox-quantum-a-leap-forward-or-a-fatal-trip.amp.html
quora.com/Why-is-Firefox-getting-worse-with-each-new-version
sourceforge.net/projects/otter-browser/files/otter-browser-1.0.01/
qutebrowser.org/
developer.mozilla.org/en-US/docs/Web/HTML/Element/b
developer.mozilla.org/en-US/docs/Web/HTML/Element/i

How about Opera's old Presto engine? And Trident?

Anyway does it really matter? I doubt any of us can even differentiate between those engines.

Those aren't being actively developed AFAIK. I believe the ones mentioned in OP are the only active ones.

Some engines are easier to develop for than others, and some are made intentionally hostile to outside use, e.g. Blink.

Attached: 3485px-Timeline_of_web_browsers.png (3485x2580, 395.92K)

Why are you like this?

WebKit seems like the only one that wants people to use it outside of its parent project.

Firefox Quantum is the reason why I haven't switched to Chromium yet. Firefox was such a slow piece of shit pre-Quantum that any improvements that paint over Mozilla's past mistakes are welcome.

>Muh! Quantum made me stick around! It's not that bad! I swear!
Here are some sites better suited for you:
amp.reddit.com/r/firefox/comments/84qhai/quantum_is_still_terrible/
computerworld.com/article/3238075/firefox-quantum-a-leap-forward-or-a-fatal-trip.amp.html
quora.com/Why-is-Firefox-getting-worse-with-each-new-version

not related to tech
elaborate more
yes this is true. I was literally going to switch to Chromium and block all Google domains with a hosts file as a temporary solution
>amp.reddit.com/r/firefox/comments/84qhai/quantum_is_still_terrible/
literally a decade old thread
>computerworld.com/article/3238075/firefox-quantum-a-leap-forward-or-a-fatal-trip.amp.html
>quora.com/Why-is-Firefox-getting-worse-with-each-new-version
Maybe he shouldn't spam the stop button like a monkey and should just click it once?

It's a few days short of a year old...

Maybe you shouldn't disregard things that you don't like, turning a blind eye to them.
The biggest problem remains: many pages render wrong in Firefox and it still has massive memory leaks.

It was an exaggeration.

The only XUL extensions I care about were ported to WebExtensions, so I'm not turning a blind eye to anything, actually.
Firefox can't render webpages in the exact same way Chromium does without forking it entirely, so it's up to webdevs to recognize Firefox as a legitimate browser too and optimize their code for it.
I don't know. Firefox never used more than half a GB of RAM on my machine. Though admittedly I turned JavaScript off, maybe that's why.

The opposite is happening right now.
One thing is JS, but the other is how badly it manages its instances.

I don't think there is much to discuss about browser engines. They all suck because they implement a batshit insane set of standards that nobody adheres to anyway. You can get a modicum of sanity back if you forsake some of the more idiotic parts like JavaScript, but the way NetSurf is going shows that people (think they) want that stuff, and CSS still has plenty of dirt.


The Presto source code was leaked at some point, but whoever leaked it did so on fucking GitHub of all places. Not sure if backups exist.

Firefox has always chewed through RAM. I've been running it as a daily browser since Firebird 0.6. It has been a constant problem and the only reason people have put up with it over the last two decades is the fact that it was the only browser with decent ad-blocking for the longest time.

...

Firefox used to be good, before the piece of shit that is Quantum. Previously, I never noticed any lag or slowness. Now TBB will regularly hang for several seconds for no reason. It really isn't surprising that so many people jumped off the Firefox ship after Quantum. What is surprising is how Mozilla shills continue to deny this. I thought they'd mostly switched over to the "oh we need to use Firefox because muh web freedom" tactics. Here I see they're still trying to push the performance angle, despite everyone having seen it to be untrue.


I have a copy lying around if someone wants it.

Everything relevant/basic was ported after Quantum. I wouldn't even use more than uBlock Origin on a generic browser profile, and that was available on day one. The only bad things about Firefox are the telemetry and the default settings, all of which can be fixed in a minute of tweaking. Firefox is the ONLY browser which has a goal of letting users gain web privacy. Nothing on WebKit/Blink can even compete.

The absolute nerve of this guy.

It's the only browser which makes it trivial to make yourself anonymous, because Mozilla works closely with the Tor Browser devs. So yes, Firefox is the only genuinely pro-privacy browser.

traxium's tab tree is still XUL-only. All the new-style tab trees are crap; they use the sidebar API, so you can't have your bookmarks open at the same time. And even basic configuration like hiding the tabs on top requires delving into obscure configs. Not a good UX.


If Firefox were genuinely pro-privacy, then the TBB config would be the default. What you mean to say is that Firefox is the least bad of the major browsers, which isn't a high bar.

[citation needed]
pozfox calls home constantly and even comes configured (by DEFAULT) with the option enabled for them to download and install "studies", in addition to the regular "telemetry", which is also enabled by default.

My perfect browser would have its own renderer that only handles plain HTML and a selected non-cancerous subset of CSS. It would have a fallback renderer (Blink by default) for sites that break, but using it would always require explicit user action. It would support rendering the whole page in fallback mode as well as individual HTML elements. It would natively support uMatrix/uBlock-like functionality, and the element-blocking tool would have an extra list of divs to always render in fallback.

It would also not fully trust CAs by default. Instead it would encourage manually verifying certs for sites you know and intend to use again. There would be no obnoxious warning when visiting a site with an unknown cert, just a special icon in the URL bar that means "you haven't trusted this site, but the CAs do", as opposed to "you have explicitly trusted this cert". Whenever the cert changes for a site you would be warned about it and have the option of trusting the new one as well. The cert dialog would include things like cert information and how many CAs trust it, to help you decide.

When on HTTPS sites, HTTP content would not be loaded by default; instead a placeholder would be rendered which you can click to load it.
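A rough sketch in C of how that trust-on-first-use cert handling could work; pin_store_lookup() is a hypothetical helper for the browser's local pin store, not any real API:

```c
/* Sketch of the trust-on-first-use cert policy described above.
 * pin_store_lookup() is a hypothetical local pin store, not a real API. */
#include <string.h>

const char *pin_store_lookup(const char *host);  /* hypothetical: pinned fingerprint or NULL */

typedef enum {
    CERT_USER_TRUSTED,  /* you explicitly trusted this exact cert before       */
    CERT_CA_ONLY,       /* CAs accept it, you never pinned it: show the icon   */
    CERT_CHANGED        /* pinned cert changed: offer to trust the new one too */
} cert_status;

cert_status classify_cert(const char *host, const char *fingerprint)
{
    const char *pinned = pin_store_lookup(host);
    if (pinned == NULL)
        return CERT_CA_ONLY;  /* no warning page, just the URL bar icon */
    return strcmp(pinned, fingerprint) == 0 ? CERT_USER_TRUSTED : CERT_CHANGED;
}
```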

Make it then you fucking idea guy

Attached: eca5709638fe3a523839ecfeb3f1ca8e91ec30f75c10ae10f41a4466c85c8d4f.png (296x324, 68.04K)

I have this regularly with an out-of-the-box v52 (though interestingly enough not with my configured one). Firefox has been a colossal piece of shit at least since version 4; that's when I switched to Opera, because my potato could no longer open the goddamn thing without swapping.

I do.

It wouldn't, because that would break 90% of websites. Firefox assumes that if you're interested in privacy you'll change the default settings. Default settings are there for normies. If they were any different, the Firefox userbase would drop below 3% almost immediately.
When I say Firefox is good for privacy I'm mainly speaking about its engine, not the default Mozilla release and not its default settings.


1. about:preferences#privacy
-Choose what to block
--All Detected Trackers (disable) [keep DNT header disabled]
--Third-Party Cookies (enable), set to All
2. about:preferences#general
-Language and Appearance
--Fonts and Colours
---Advanced
----Allow pages to choose their own fonts (disable)
3. about:config (see the user.js sketch after this list)
privacy.resistFingerprinting > true
webgl.disabled > false
privacy.firstparty.isolate > true
4. about:addons
install "uBlock Origin"
install "user-agent switcher"
5. user-agent switcher
set your user agent to "Mozilla/5.0 (Windows NT 6.1; rv:60.0) Gecko/20100101 Firefox/60.0"
6. Set your window size to 1000x700, or restart the browser; it will open at 1000x700 by default each time.
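The about:config prefs from step 3 can also be dropped into a user.js in your Firefox profile directory so they get re-applied on every startup; a minimal sketch (with webgl.disabled set to true; the "false" in the list above is a typo, see the reply further down):

```js
// user.js - minimal sketch of step 3 above; read on every Firefox startup.
user_pref("privacy.resistFingerprinting", true);
user_pref("privacy.firstparty.isolate", true);
user_pref("webgl.disabled", true); // keep WebGL off; the "false" in the list above is a typo
```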

Done. You now have the same fingerprint as Tor Browser. Disable JavaScript by default and all 3rd-party scripts in uBlock Origin and you're pretty much trackable only by your IP and browsing habits, which you can't fix without Tor or VPNs. As for the telemetry, look into:

?
does tor browser fucking enable webgl?

Long gone are the days when every little browser author actually wrote engines from scratch, and the W³C maintained their own "reference implementation" browsers like Arena/Amaya.

Now that M$ has finally thrown in the towel on Edge, we're inches away from the WHATWG declaring "HTML"5 defunct in favor of Chrome's source tree, with the only protection being Mozilla. Apace with their effort to transform into Chrome, the industry will finally throw any semblance of coherent standards for Apple/Google to comply with out the window.

Still standing:
The deceased:

Attached: mozilla gecko future.jpg (333x500, 35.75K)

No, it's a typo.

Lynx is not a meme, I use it for a fair share of stuff. Codewise it's rather uncomfortable spaghetti C from the nineties, but since it's HTML-only there is much less attack surface. Like I said earlier, if you want something sane, you have to massively adjust your expectations. What the users expect™ is broken idiocy, and so broken idiocy you will get if you follow them.

:(

Forking != embedding. Forking implies you have to maintain your own copy of the source tree; any time the upstream changes you have to merge that. Embedding implies there is a stable API; any time the browser engine changes you can just pull in the changes and it will continue to work against your code (version bumps notwithstanding).
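For a concrete picture of the embedding side, here's a minimal sketch against WebKitGTK (the GTK+ binding from the OP); it assumes the webkit2gtk/GTK3 development packages are installed:

```c
/* Minimal WebKitGTK embedding sketch: the engine is consumed through a
 * stable API call, not by forking its source tree. Illustrative only. */
#include <gtk/gtk.h>
#include <webkit2/webkit2.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_default_size(GTK_WINDOW(window), 1024, 768);

    WebKitWebView *view = WEBKIT_WEB_VIEW(webkit_web_view_new());
    gtk_container_add(GTK_CONTAINER(window), GTK_WIDGET(view));

    webkit_web_view_load_uri(view, "https://example.org/");

    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}
```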

Remember years ago when both Chrome and Firefox were shit that would crash with "this tab has stopped responding" and leave orphan shit everywhere? Or after a few days they'd use like 2 GB on the home page. Or tabs using 100% CPU, or videos glitching and freezing, or audio just plain not working.
Lol, what a nostalgia trip. Nothing like that happens any more.

Wouldn't that describe CEF & QtWebEngine?

W3m is surprisingly usable for what it is. I guess mostly I just think it's cool though.

Is this because of the engine, or because of the chrome? Could a webkit browser be made as "privacy focused" as Firefox?

I used that list to find non-botnet browsers. "Every single browser on that list is subverted" was the criterion.

These are not the same, though, and with MS bending the knee to Google, Apple will probably give up WebKit sooner rather than later as well.

I want that too. I'll mirror it forever.

They're far more than similar enough that if Mozilla gave up, a complete monoculture would exist. It would be utterly impossible to distinguish between the intended behavior of WHATWG's "living 'standard'" and a bug in the one single rendering engine that "implements" it.

Maybe they could, but not a single WebKit/Blink dev gives enough of a shit about privacy, nor does anyone other than Google have enough influence to fundamentally change the engine to allow proper privacy options. Just look at Brave. Their fingerprint protection doesn't work at all; it's so bad that even its Tor tabs can be uniquely fingerprinted.

Their code is open source; look at it some time and despair. Also compare the age of the code with the number of CVEs; a ton of security problems are in new code. And like I said before, HTML-only has advantages: an easy 99% of exploits require JavaScript.

What does a rendering engine need to have the security and privacy focus you're looking for?

It shouldn't have any features which can remove your anonymity. There's no reason a browser would need to know my exact hardware and screen size.

Used to calculate viewports for responsive design.

That kinda sounds like an ass-pull. Got anything more substantial? I'm thinking of making a rendering engine as a little project to work on on the weekends.

Add Pale Moon (Goanna) to Still Standing.

wew

It's not needed. Firefox works fine when you deny websites screen size access.

It depends on what you're willing to settle for as far as looks go. This breaks responsive design - which is a relatively new trend anyway.
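For reference, this is roughly what "responsive design" uses the viewport size for; a generic CSS example, not from any particular site:

```css
/* Two columns side by side on wide viewports, stacked on narrow ones.
   Deny the page the viewport size and only one of these layouts can apply. */
.columns { display: flex; gap: 1em; }

@media (max-width: 600px) {
  .columns { display: block; }  /* stack the columns on small screens */
}
```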

I think a better solution is making websites declarative rather than imperative - so that JS isn't a thing. A perhaps more realistic solution is some kind of layering of JavaScript, where one runtime can access the DOM and handle layouts, and another can perform AJAX requests and pass results to the DOM runtime one way, so that information about screen size etc. can't be leaked.

Disabling JS is one thing, but these CLI browsers don't even have reasonably complete (or, in many cases, any) support for crucial features such as CSS or SSL. CLI aside, the experience is similar (apart from being able to use modern crypto certs much more easily) to loading up an early-2000s version of NS or IE and looking at modern sites: everything is so incredibly broken.


It's just a fork. Maybe it will become something more some day, but that's a long ways off.

Brave

Do you wew at alchemists? Yeah I'd never finish it in my lifetime, but it's a fun thing to fuck around with.

Uses Blink.

Scamware

Bump for Presto source code.

This timeline is garbage. Netscape Navigator was created as Mozilla, with the express idea that it was a "Mosaic killer". Spyglass Mosaic (MSIE) and NCSA Mosaic are also obviously related. As for the KHTML forks, the timeline is clearly Konqueror -> Safari -> Chrome.

This

Are there any alternatives to the DOM? Is there a better way to parse out a webpage?

Is it just me, or are most of the good privacy-respecting browsers these days Linux exclusives?

Can any of you recommend me something that's not based on either Chromium or Firefox and that supports multiple OS platforms? (Midori does not count, since the Windows version hasn't been updated since 0.5.11.)

Isn't Midori open source? Just compile it.

It's a real waste of effort. If you meant lacking BSD etc support, afaik furryfucks works there too.

Midori crashes constantly for me. Nigh unusable.

Would you mind if I turned off the lights?

Attached: make anon less of a faggot.jpg (1508x1000, 233.98K)

Attached: vivaldi.png (1366x730, 85.92K)

You're looking in the wrong place, mate.
Might I suggest Otter Browser?
It's basically the IceCat of Opera browsers, but written completely from scratch using Qt5.
Did I also mention it's open source AND cross-platform? As well as one of the few spyware-free Qt browsers currently fully supported on Windows and macOS?
I tried it myself, and it's great for what it is. Give it a spin and get rid of all that other nonsense.

Attached: 1.png (1600x900 133.79 KB, 47.58K)

Does it run on old computers without SSE/SSE2?

I don't think that was specified anywhere. That's for you to test out since you mentioned it mate, it's obscure for a reason. If it does not work on your PC, it's just another thing that can be implemented in a future release since it's Qt Open Source, and the developers can be contacted on SourceForge and GitHub.
All I know is, it still supports XP and 32-bit computers:
sourceforge.net/projects/otter-browser/files/otter-browser-1.0.01/

If you just want the bare minimum, might I suggest qutebrowser:
qutebrowser.org/

Attached: 600px-Otter_Browser_Logo.svg.png (600x600, 189.68K)

Dillo is still alive.
And you forgot Links in "alive".

Depends on the BSD. OpenBSD has stayed clear of that, while FreeBSD has adopted a code of conduct, with a subsequent decrease in package and ports updates and loss of users and donations.

Always separate content from presentation; that's the way it worked in SGML, and that's the way it should've worked from day one with HTML.


Dillo hasn't been updated in 4 years; good catch on Links though.

Welcome.

I used it for a long time. It makes connections that you can't turn off and it's fucking slow. Many of the features are useless bloat.

Falkon is really nice.

Literally just a UI for chrome/safari.

Zig Forums does not work properly with Falkon. There is no floating response window, images always open in a new tab, and there are no response links to a post. Pic related is Falkon on the left and Firefox on the right.

Attached: Screenshot_20190317_134402.png (788x207, 34.83K)

If only they stopped hosting shit on SourceForge

No request blocking; it's loading some unnecessary JS right there in the pic lol

SGML is not exactly something to imitate; it was a gargantuan pile of shit that nobody really understood. Whether it's better than current HTML can be argued, I guess. How do you expect a separation of content and presentation to work? Preferably it shouldn't involve humans writing semantic markup, because if history has shown one thing, it's that this doesn't ever happen. Feel free to give an example, say with imageboard threads.


Would be nice, but they would probably opt for GitHub instead, which is basically a rerun of the SourceForge bullshit.

FrameMaker is IMHO the best example, though there are others like Interleaf and 3B2.
Ding ding ding, we have a winner!

Interesting fact: The original web client, Nexus, was not a web "browser" but an integrated GUI HTML browser/editor, as were the LibWWW API and all of W³C's subsequent testbed clients. The intended paradigm was using a GUI to read and write webpages (and yes, style sheets did exist, though they were very limited in terms of flexibility), while HTML was never normally intended to be seen by human eyes nor written by human hands.

The idea of a read-only "browser" was the accidental result of producing the supposedly specialized Line Mode Browser, the first CLI client and also the first cross-platform client (Nexus was NeXTSTEP-only). Since Line Mode Browser was how nearly everyone outside the NeXT bubble first experienced the web, combined with the also cross-platform CERN httpd server, this meant nearly everyone who wrote clients like Viola and Mosaic thought the "normal" way to use the web was to write HTML by hand in a text editor and view it in a read-only browser.

CERN/W³C, in retrospect, considered this a critical failure on their part.

Something like an imageboard honestly shouldn't function inside a website, with a better fit for the web format more closely resembling a wiki (e.g.: heavy reliance on powerful automated transclusion, Boolean operations, etc.) to reuse, generate, and organize content based on machine-readable semantic tags.

For something more like an imageboard, a database-centric client API like USENET would probably be better than a website to begin with, though on top of that transport/storage layer, a markup-based semantic and presentational standard would be superior to plaintext.

JavaScript is disabled on the left. It has nothing to do with browser engines.

The first two are just "my extensions don't work since they fixed the fucked-up security and extension model. I don't want multiprocessing, I just want my extensions to be able to be massive security liabilities and performance roadblocks."
The last one is literally just an unsubstantiated conspiracy theory that makes no sense, with Google donating a bunch of money and then allegedly giving secret orders to Mozilla to make the browser worse.
You're a fucking idiot. You found the worst possible arguments and just bought them hook, line, and sinker because they agree with what you already think.

If you hate Firefox, have some relevant, modern, and actually correct reasons, instead of taking the first stop on the confirmation-bias engine that is Google, you stupid fucking idiot. It's so fucking obvious you used Google: you linked to a fucking year-old Reddit AMP page.

Be careful with resistFingerprinting. It currently reports your resolution as your window resolution instead of screen resolution, which often gives you a very reliable and specific fingerprint. It does good things with your user agent and some other features, but it's still rough enough around the edges as to be considered experimental at the moment, and can currently make your fingerprint more unique rather than less.

CSS is actually essential to the modern web though. If you want a hyperlarp browser that renders HTML only, that's fine, but CSS is actually a HUGE benefit - it separates the semantic structure and content of information from its presentation, and is rapidly interchangeable, including by users. Not to mention, it's currently widely deployed, as virtually every website on the internet uses CSS.

If anything about the web is worth keeping, it's CSS separating content from presentation.

Good thread. Bumpan

Attached: 505186109eb5ace6ee6610bda4a06cba0f600530326cce4560a72f104d788977.jpg (694x530, 140.66K)

Javascript works on other websites. I'm not saying that it's the engine's fault, but something is wrong with Falkon for me.

Webshit and JavaScript weenies only exist because C and shell scripting are not powerful enough for websites. On a networked Lisp machine you could just download standalone Lisp scripts from a webserver, run them like any other program, and they could do whatever they wanted: no extra weenie languages required. These solutions are obvious to real programmers but UNIX braindamage keeps you weenies from seeing them.

what to do with a tree structure. They're usually had a tree structure.(Partly to get back on this one forced to admit, I don't know, whatever theanswer to the question that the PDP-11 compiler class with its own locateall storage, probably because the middle of a block structure. They'retrying to rational semantics for C. It would programmers who use lots ofglobals and wouldn't know whatever the instructured file system? Myguess is Multics, in the middle of a block with its own local storage,probably amount to a translationalize the late '60s. the late '60s.hit!" butstopped myself just in time someone counters my claim that the instructorstated "Before Un*x, no file system? My guess is Multics, in time.For reason to give me a really had it.But because it lookslike it lookslike it lookslike it does, and would probably because they reasons I'm ashamed tohappen when I don't know, whatever the delusion "what's supposed toinvoke this procedure, not entry to a translation "what's supposed toallocate all storage on entry to a block.But it's much worse than than because you can jump into the middle of ablock, the compiler is forced to happen when I do ?"used to happen when I do

))))))))))))) - this is the future and unix weenies can't even compete
LISP when? lisp is so perfect

It doesn't happen because webdevs always try to stretch the technology further than it is meant to be used. You can write nice semantic static pages, but people always want to shove in decorative shit that serves no semantic purpose and makes the site less usable, but looks fancy.

I'm making a web site currently and my "client" (quotation marks because I'm doing it for free) is very satisfied with my proposal. It's clean, simple, precise, 100% static, responsive, no Javascript. It's funny, we were looking at some other related websites and he found all that added nonsense distracting and silly, which leads me to believe that bad websites are not necessarily because people want them, but because webdevs want to have fun on the job.

As for writing HTML by hand, I'm using Lisp s-expressions. I can write content as SXML and at any point I can splice in the result of a function call. It's a very comfy setup, much more compact than any templating language I have seen.

somewhat based

joke's on you, "semantic" doesn't actually mean anything. you got memed. HTML is a good idea (an obvious idea) with a terrible implementation. it's just a bunch of syntactic elements denoting stuff like paragraphs, images, text style (bold/italic), etc. it's like a text document but not fucking useless

That's what semantic means. Except for the style (bold, italic); that part does not belong in HTML. Use em and strong instead and use CSS to style it according to your wishes. The b and i tags no longer mean bold or italic anyway.
developer.mozilla.org/en-US/docs/Web/HTML/Element/b
developer.mozilla.org/en-US/docs/Web/HTML/Element/i
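A minimal illustration of that split, with the meaning in the markup and the look decided separately in CSS (a generic example):

```html
<!-- Semantic tags carry the meaning; the rendering is decided in CSS. -->
<style>
  em     { font-style: italic; }   /* stress emphasis */
  strong { font-weight: bold; }    /* strong importance */
</style>
<p>This is <em>stressed</em>, and this is <strong>important</strong>.</p>
```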

Actually braindead. Nothing prevents that from occurring right now. In fact, it somewhat exists; it's called native software.

You could trivially write a wrapper to download index.lisp and execute it, but this is exactly the opposite of what a non-idiot would want, for a variety of reasons.

First, security. I do not want every website I visit running native code on my computer. Simply fuck off. Part of why browsers are useful is that they provide a limited execution model for code that doesn't allow all of the complexity that local access would. Imagine every single privilege escalation vulnerability ever discovered pwning your machine because you visited a Lisp site. Fuck off.

Second, browsers specify a complete execution environment including a GUI, storage, DOM APIs, styling, etc. It's incredibly complex, and cross-platform. You can't simply replace all that with Lisp. Lisp is a programming language, not a cross-platform spec for rendering documents. You could import all that kind of stuff and rebuild web 3.0, but why?

The actual solution is making the web STRICTLY declarative and providing NO execution model. This would make the web much more secure. It would also force a bunch of new technology to handle things like AJAX and jQuery-like APIs through declarative syntax.

Semantic does mean something. Having structural elements of the page denote things like articles, quotes, and citations allows cool features like Firefox's "Reader Mode". It could also allow automated fact checking, etc. It could be a very exciting feature with a semantic web.

This is actually pretty fucking schway

Anybody remember back when CSS was first being paraded around, and a common notion was that users could apply the same CSS everywhere they went? That users would be the ones writing CSS, not webdevs?

Instead, every individual element of every page on every site has its own CSS glued to the HTML in a non-semantic blob just as architecturally invalid as the "spacer GIF & invisible tables" bad old days of HTML4. Webdevs still think they're writing glossy magazines in QuarkXpress, instead of semantic hypertext meta-documents.

Or something as simple as trivially and unambiguously scraping a store's page for the price/part#/availability of products.


I'd actually be okay with an execution model for online stuff like the web or whatever, just as long as it wasn't being abused for things other parts of the standard (HTML+CSS, in the case of the web) are supposed to be used for instead. The biggest problem isn't security (any competent sandbox architecture dating back to the 1960s could fix that), but bloat.

I'd like a separate Turing-incomplete subset for the scripting environment, in addition to "complexity profiles" that would limit the number of discrete operations in tiers (e.g.: 100 ops, 1K ops, etc.) as well as instantaneous working-RAM limits. This would impose necessary discipline on what are and should be simple tasks, while still allowing the clientside flexibility and power needed for many tasks.
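As a sketch of what such a "complexity profile" could look like in code; struct vm, vm_step() and vm_bytes_live() are hypothetical placeholders, not any real engine's API:

```c
/* Sketch of the tiered op/RAM budget idea above.
 * struct vm, vm_step() and vm_bytes_live() are hypothetical placeholders. */
#include <stdbool.h>
#include <stddef.h>

struct vm;                                   /* opaque script interpreter state */
bool   vm_step(struct vm *vm);               /* run one op; false when finished */
size_t vm_bytes_live(const struct vm *vm);   /* current working-set size        */

struct profile { size_t max_ops; size_t max_bytes; };

static const struct profile TIER_100OPS = { 100,  64 * 1024 };   /* small tier */
static const struct profile TIER_1KOPS  = { 1000, 512 * 1024 };  /* large tier */

/* Returns true only if the script finishes inside its profile. */
bool run_script(struct vm *vm, const struct profile *p)
{
    for (size_t ops = 0; ops < p->max_ops; ops++) {
        if (!vm_step(vm))
            return true;                     /* finished within the op budget    */
        if (vm_bytes_live(vm) > p->max_bytes)
            return false;                    /* instantaneous RAM limit exceeded */
    }
    return false;                            /* op budget exhausted              */
}
```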

Attached: ABC XML: Separation of Document Components Structure Presentation.jpg (640x352 76.04 KB, 74.41K)

I really like Brave.

Fucking browser engines. It seems like there should be a way we can slap some libraries together and have one. I know it's not really that easy, but it's just one of those things. I should be able to slap together some libcurl and libxml, and throw it into cairo and have a browser.

Good Idea!
Let's call it the "Pharaoh Browser".
With the embedded idea that "You are your own emperor, librarian, and scientist."
I already have a logo concept in mind.

Attached: pharaoh.png (1024x1024, 180.98K)