Input latency

Look at that lag. Lispmachinefags BTFO. It is literally slower than sending a packet across the planet.

danluu.com/input-lag/

Attached: latency.png (741x1024, 245.34K)

...

...

Why are Cniles and weenies so thin skinned?

Attached: Screenshot_2019-04-25 Terminal latency.png (1492x756, 96.42K)

Those #T values are obviously fucking bogus.

On regular 6502 systems, it was ~24 cycles, not 3500.
68000 systems-- ~92 cycles.

Here T seems to stand for the Transistor count of the CPU.

I'm astounded they could fuck up that badly on custom hardware. This is embarrassing and makes anything with low latency requirements downright unusable on the workstation.

st's latency goes down by a lot if you edit the xfps and actionfps settings in config.h (I think you can get it into the 2ms range or lower). X11 also has higher latency than Wayland.
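For reference, those knobs live in st's config.h and look roughly like this (just a sketch; the exact names match older st releases, and 500 is an illustration of cranking them up, not the upstream default):

/* in st's config.h -- both are "frames per second" caps on how often st
 * redraws; raising them trades CPU time for lower buffering delay.
 * 500 is just an example value, not the default. */
static unsigned int xfps = 500;
static unsigned int actionfps = 500;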

Good Lord how can lisp fags even recover.

bahahahahaha nice try lispfag

The worst example you could find is still faster than the lisp machine

I am shocked.

300ms is well past the point online games need god-tier client-side prediction to pretend they're playable. If you wanted to develop games for a Symbolics machine, you'd have to use fucking multiplayer techniques to compensate for your own machine's terrible latency.
Games are a great stress test for operating systems and programming languages.
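For anyone who hasn't seen it, this is roughly what client-side prediction looks like boiled down to a toy (all names here are made up, it's just the shape of the trick): apply inputs locally the instant they happen, remember them, and when the slow authoritative state finally arrives, rewind to it and replay whatever the server hasn't acknowledged yet.

#include <stdio.h>

#define MAX_PENDING 64

typedef struct { int seq; float dx; } Input;

static float predicted_x = 0.0f;          /* what we draw immediately */
static Input pending[MAX_PENDING];        /* inputs the server hasn't acked */
static int   npending = 0;

static void local_input(int seq, float dx) {
    predicted_x += dx;                                    /* no waiting on the network */
    if (npending < MAX_PENDING) pending[npending++] = (Input){ seq, dx };
}

static void server_update(int acked_seq, float server_x) {
    predicted_x = server_x;                               /* rewind to authoritative state */
    int keep = 0;
    for (int i = 0; i < npending; i++) {
        if (pending[i].seq > acked_seq) {
            predicted_x += pending[i].dx;                 /* replay unacknowledged inputs */
            pending[keep++] = pending[i];
        }
    }
    npending = keep;
}

int main(void) {
    local_input(1, 1.0f);
    local_input(2, 1.0f);
    server_update(1, 1.0f);          /* server has only seen input 1 so far */
    printf("predicted x = %.1f\n", predicted_x);   /* 2.0: input 2 was replayed */
    return 0;
}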


If you're interested in the sauce, it's up at danluu.com/term-latency/

Attached: libbie_otherside_cropped_rotated.png (331x331, 45.57K)

...

"Muh games" is the reason Unix exists in the first place: a Multics developer named Ken Thompson was making a video game named Space Travel for fun in his spare time, but Multics was too fat to run it properly (to be fair, GECOS ran it even worse) so he switched over to a cheaper PDP-7 and wrote a much better version. Shortly after this he began writing a lighter Multics-like operating system for the PDP-7, ported Space Travel to it, and thus Unix was born.

Figures, Unix is a toy OS.

...

...when people with a clue discuss CPU interrupt latency, "T" means the number of T-states of the instruction decoder.

With the 6502, an instruction was at most 6 T-states (= phi clocks), and when an interrupt occurred, it went through instructions (w/ variable number of T-states on each) to push PC, push S(tatus), and set the PC to (BRKVEC). It was actually the most efficient CPU for doing interrupts on in its time-- the Z80 took something like 56 T-states to do the same thing.
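To put those cycle counts in perspective, back-of-the-envelope only, assuming a ~1 MHz part and the ~24-cycle figure quoted upthread:

#include <stdio.h>

int main(void) {
    const double clock_hz = 1.0e6;  /* assumed ~1 MHz 6502 */
    const double cycles   = 24.0;   /* ~24 cycles of interrupt overhead, per the post above */
    printf("keyboard interrupt overhead: %.0f us\n", cycles / clock_hz * 1e6);
    /* ~24 us -- around three orders of magnitude below anything a human can
     * perceive, which is why the #T column can't be interrupt overhead. */
    return 0;
}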

...

Typical. Have some spoonfeeding:
>The next two columns show the clock speed and number of transistors in the processor. Smaller numbers are darker and blue-er. As above, if slower clocked and smaller chips correlated with longer latency, the columns would get darker as we go down the table, but it, if anything, seems to be the other way around.

If your OS and programming language shit themselves when you try making games, there's a good chance they aren't flexible enough for other demanding or unusual tasks either.

Name a task for a workstation that requires sub-300ms input latency in a terminal.

Part of what I figured was happening with the Symbolics was I/O handling, such as special character lookup and calling bound functions and whatnot.

That is, I figured the Symbolics program he used was like Emacs in that, when you press a key, an arbitrary function is run to determine what happens.

This would help explain why it's slow, because it's doing much more. But hey, the significantly more complicated UNIX machines, doing far less, are less than an order of magnitude faster while being less capable and less customizable, but whatever.

The Symbolics 3620 had a graphical operating system, faggot. This latency completely rules out a lot of professional audio work (probably not coincidentally, I'm having a lot of trouble finding anything on the audio capabilities of Symbolics machines) and makes them very unattractive to other markets they tried shilling their overpriced workstations to, such as animators. For other stuff like 3D modelling, video editing, and programming, the worst 300ms of latency does is annoy people used to more responsive systems.

a computer that costs so much should be able to run everything

It's interesting to compare Symbolics to Synclavier.
Both ended up with "Ivory" editions that used a Macintosh as the terminal.
General purpose computers struggled with audio pretty much up till Y2K.

True. It's just funny how the board has gone from autists shilling CRTs for their responsiveness to anti-Unix shitters defending absurdly high latency with "you don't actually need a responsive computer, do you?" niggery because a famous meme machine suffered from it.

The OP brought up terminals, retard. This is what the entire thread is about, did you only look at a few numbers and started writing shit without a clue?
Yes you did.

Typing.

The thread isn't about terminals. Both it and the article he links are about latency in general (see the titles: Input Latency for this thread, Computer Latency for the article) and to test workstation latency the author uses terminals. Given how Symbolics machines and Genera worked the boundary between terminal and graphical programs would have been smaller than in most operating systems, so do you honestly think most of those 300ms originated within the terminal itself? Even if the terminal somehow added 100ms on its own (highly unlikely), avoiding it would still give you a workstation that's at best 10ms less responsive than sending a packet around the globe. That's still well within "we need client-side prediction for this shit"-tier latency.
It's stupid, but not contradictory. People have done much dumber things before.

Can you tell the difference between 90ms, 60ms and 50ms?

Isn't that the oldest model that wasn't a CADR clone, i.e. Symbolics' first machine? I wonder how the later models fare.

That was the 3600. The 3620 came out three years later and was apparently a little faster thanks to some hardware improvements.

You are mentally deficient if you can't tell the difference between 90ms and 60ms.

fyi, it's 250 ms. You can get those down to 180 ms easily, but you have to be a fucking alien to get a brain timestep under 100 ms, let alone 30 ms

Are you Jewish? You must be, otherwise you wouldn't mingle my own words with such disdain.

I never mentioned reaction times. I talked about perception. And yes, you are mentally deficient if you can't perceive a difference between 90ms and 60ms.

Yes, most healthy layfolk can unconsciously detect (and trained experts accurately quantify) differences in total system latency well under 10ms.

To clear up this misunderstanding, the key point to keep in mind is that perception of latency is cumulative, not overlapping. That is, humans do indeed have a minimum conscious hand-eye reaction time of 200ms typically, but just a few milliseconds of additional latency on top of that will still noticeably impair performance in demanding tasks:
blurbusters.com/human-reflex-input-lag-and-the-limits-of-human-reaction-time/

Attached: reaction-times_thumb1.png (1550x460 151.73 KB, 40.18K)
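In numbers, just adding the figures already in this thread together, nothing fancier than that:

#include <stdio.h>

int main(void) {
    const double reaction_ms = 200.0;                        /* typical conscious hand-eye reaction */
    const double system_ms[] = { 10.0, 60.0, 90.0, 300.0 };  /* example system latencies from the thread */
    for (int i = 0; i < 4; i++)
        printf("%3.0f ms of system lag -> %3.0f ms from stimulus to effect on screen\n",
               system_ms[i], reaction_ms + system_ms[i]);
    return 0;
}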

Only a Jew wpuld falsely accuse others to be Jewish in order to futher their stupid claims.

What if it's a game dev workstation tho

...

no ur a jew

There's also that humans can discern a lag at a much smaller threshold than their own reaction time. Reaction time is your time before you can react to something, but if you are moving around a mouse cursor that is lagging, it's going to be noticeably irritating at a far smaller threshold, and that goes for a lot of other input as well. Trying to play music remotely with people falls apart at very small thresholds.

input lag has nothing to do with CPUs you retarded niggerfucks. CPUs events are in nanoseconds or less. human perception is in hundreds of microseconds and higher

240Hz LCDs still look like shit in motion precisely because the pixel transitions (which are all well under 10ms) are too slow and non uniform.

input latency is a problem for all systems. games are only one tiny application. on the other hand most hardware such as a microwave has retarded broken debounce shit instead of any problematic lag

i'd be complaining too if i spent thousands on a PC and it couldn't run games. games aren't a special class of programs other than needing hardware compatibility.

shut the fuck up retard. they're already slow as fuck but i don't know of any that is anywhere near as high as 300ms
and here, I'll name a task: Screen updates when scrolling text need to happen within a single frame. Otherwise on a CRT or strobed-backlight LCD, you will see double images while text is scrolling. Now on your broken ass system, you have to stop every time you want to read anything instead of how it used to work in the CRT days

CRTs are still more responsive in terms of latency as well as pixel response. I tried a 240Hz BenQ gaymer monitor and it's aids garbage. all the real reviews of LCDs like tft-central show that LCD tech is still shit too. Even a shitty old 60Hz CRT is better than most LCD right now (though I prefer 85Hz+ on actual quality CRTs). Your shit about "anti-Unix shitters" is wrong too. You can get low latency on non-unix non-C systems easily. Unfortunately, most developers are too incompetent to even comprehend the problems around input latency, including you, since you don't understand the problems of LCDs.

a what? vsync on 60Hz adds 16.66ms to whatever input latency a common PC already has, and I and everyone else who can competently game (this is a huge difference from "pro" or "hardcore gaymer XDDD") can feel it

That's the point, faggot. A workstation from the 1980s with custom hardware, a custom operating system, and a CRT somehow has hundreds more milliseconds of latency than modern computers hooked up to LCDs, even LCDs handicapped harder than usual by restricting them to 24Hz instead of 60hz. There is absolutely no excuse for latency this terrible and yet we somehow have shitters defending this latency solely because it's found on a famous non-Unix workstation.

I honestly can't sense the input lag added by v-sync

It's most obvious in faster FPSes like Quake.

Do you use a tv as a monitor? Any first person game feels crap with vsync on

Also

Only if you leave the config as default like a dumbass, it's 3.5ms with the fps up

because you're doing something wrong
some LCDs have tons of lag and some have none, it's completely random and depends on how bad the vendor fucked up, even within the same product line
UT99 and some other games will use the windows desktop settings (not sure if the same thing happens in Linux) for controlling the mouse so you need to disable "enhanced mouse precision" in the windows mouse settings while playing the game
it will be harder to tell if there's an input lag problem if the game's already running like shit and skipping / doubling frames here and there
Remember with vsync, 60Hz input lag is 16.66ms. 120Hz input lag is 8.33ms, 144Hz is 6.9ms. 240Hz is 4.16ms
then there will be no input lag (unless it's configured wrong. then it will degenerate to vsync lag). of course a laggy monitor or game will still lag
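Those numbers are just the refresh period, if anyone wants to check them:

#include <stdio.h>

int main(void) {
    const double hz[] = { 60.0, 120.0, 144.0, 240.0 };
    for (int i = 0; i < 4; i++)
        printf("%6.1f Hz -> %5.2f ms per refresh (worst-case extra wait for vblank)\n",
               hz[i], 1000.0 / hz[i]);
    return 0;
}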

It's noticeable on any FPS whatsoever. Crysis X, Call of Duty X, Counter Strike X are some examples where I remember people complaining about vsync (and I've tested it myself and it's indeed bad). Moving a cursor let alone aiming in a 3D game are both applications where 16.66ms is way too much lag. For merely moving a cursor, unless you flick your wrist to select each button, you're interactively getting feedback as you move the mouse across the screen. If the feedback is slow, it's going to be painful and inefficient (you keep missing buttons or have to focus extra hard to click on them)

FIX CAPTCHA YOU RETARD NIGGERS IT MAKES YOU SOLVE TWO EVERYTIME YOU FAIL ONE

I play UT99, does that count?

Disabled enhanced precision and also applied MarkC mouse fix

No, Dell S2417DG set to 120Hz and with GSync disabled. GSync introduces a flickering effect on my screen for some reason. I leave it at 120Hz so that 60fps videos won't be interpolated. I could never get the 165Hz overclock to work anyway

I couldn't feel any lag even when I used a 60Hz LCD. But I can immediately see when there is screen tearing. I don't understand how people tolerate that shit with v-sync disabled

Quite slow, no? Personally, I always said Q3CPMA > VQ3 > UT2004 > UT99.

Yeah, that's what I >implied, and my linked article documents in detail.


Agreed, though they look better than lower framerate LCDs when combined with backlight strobe, which aside from its primary purpose of blur mitigation helps hide artifacts from more aggressive overdrive to reduce smearing. Not to mention the reduction of latency and greater amount of information being sent to your eyes every second.
That whole argument was about interpreted and/or garbage collected runtimes specifically, not non-*N*X/C platforms generally.


Note that while disabling v-sync will (assuming the setting isn't broken by the game in question, which I remember it is in some cases) eliminate the additional latency it causes, some modern games actually have multiple frames of in-engine latency, even on PC. I vaguely remember CoD:MW may have had this problem, but Bethesda games (Oblivion, Fallout 3, etc.) definitely do:
anandtech.com/print/2803/


Quake III was tighter designed and more timeless, more "perfect", but at the expense of being simpler and blander. UT99 was chock full of way more cute details and personality, while remaining very solid on a competitive level. I'd give the nod to UT99 for being the better game in the era they shipped.

Q3 was more of a 3D tech demo than a game. The engine was a peach and everyone, including the players had access to it. It was a dream come true.

They don't necessarily (laymen may not be able to distinguish it from other artifacts of the monitor and media), but the lag from double buffered vsync is worse.
pcmonitors.info/reviews/dell-s2417dg/ says it has 3.51ms lag at 165Hz. As usual they fail to measure input lag at a reasonable range of refresh rates, and it's only measured for one color transition, which could also give faulty measurements. I wouldn't be surprised if there was over 10ms of lag for 60Hz. It could be anything (including 0) though, as monitor designs are completely random.
How do you even know how to disable vsync in UT99? For OpenGL, you open System/UnrealTournament.ini and find
[OpenGLDrv.OpenGLRenderDevice]
and put this under it:
SwapInterval=0
Or if you want to use system settings:
SwapInterval=-1
But these settings only work with UTGLR (cwdohnal.com/utglr/) which replaces the default OpenGL driver. I don't know if there's an option to disable vsync on the default OpenGL driver.
For Direct3D you find
[D3DDrv.D3DRenderDevice]
and put this under it:
UseVSync=True
True actually means disable vsync. As usual gamedevs are retarded. Of course, maybe the bullshit way we use the term "vsync" now wasn't in use yet in 1999. The Direct3D vsync is laggier than the OpenGL/UTGLR vsync. Also, the OpenGL vsync doesn't work on one of my computers (with an AMD card). It just falls back to unsynchronized rendering.


LOL. but yes some shitty games do have a lot of lag on their own
Quake Champions might actually be the first FPS game to have better aesthetics than UT99. Too bad it has shit performance.

Lisp:
A bunch of braindead dodo hacks from the 60's. A complete mockery of lambda calculus. Lisp dodos don't know mathematics.

Average reaction time is 250 ms. It's an average not from person to person, but from instance to instance. Sometimes you can randomly react very rapidly and sometimes realization of important events occurring directly in front of you slips right past your mind for prolonged periods of time. Specific reflexes can work very rapidly, but if it has to go through your cortex, your reaction time will never be under 200 ms except by a fluke.

Can a brand new ferrari pull a 40 ton flatbed up a 10 percent grade? Yeah didn't think so.

"your eyes can't see past 30 FPS"

< You can't see them flickering at 30 Hz

"you can react faster than signal travels across neurons"

everywhere i go now i see shit flickering. the light on my razor flickers. the light on the outlet in the bathroom flickers. car headlights flicker.
blame the jews
[4 captchas were solved to make this post]

You learn some things when you've played the same game for 20 years. I use Kentie's D3D10 renderer btw

fuckos just because your reaction time is 200 MS does not mean anything below that is unrelated to humanity. Try playing music where every note is a 10th of a second off and see how it sounds.

But it is kosher, despite the difficulties.

Just when I think these discussions are meaningless I'm reminded of the hordes of normalfags who actually believe certain resolutions become meaningless past certain distances and that all that matters is whether you can spot the pixel grid or not and if they're not violently beaten or argued against at every chance possible the kikes at the display industry will stall progress to make more shekels.
Welcome to the democratic world.

Attached: j_law_putin.jpg (575x729, 156.52K)

Just do everything in GNU Emacs, it’s close enough to a Lisp Machine for all practical purposes.

It's certainly the closest you'll get nowadays, but it's not even remotely "close enough". Emacs is a joke compared to a real Lisp machine.

Input lag has everything to do with CPUs since they are the heart of the system and where input lg originates. Games are always more responsive with SMT off, and if you have a Gayzen, are more responsive in a 4+0 BIOS configuration. RAM latency also has a giant role in input lag since it's such a large bottleneck for the CPU. If nanoseconds didn't matter, then CPUs would stay clocked at 1 GHz at 1 cycle per nanosecond. Basically, you're a fucking braindead nigger monkey.

It helps to run at a framerate massively higher than your screen's refresh rate, because then you tear multiple times every refresh: the frame overwriting a frame that is currently being read is more similar to the one it just wrote over, and it can reach the point where it's unnoticeable.
Or you can go for the best of both worlds and use a frame limiter set to a multiple of your refresh rate +1 while triple buffering. It's not single-buffering levels of latency, but it's massively better than double buffering, and it gets better the higher the multiple.
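Concretely, on a 60Hz panel that cap works out to the following (pure arithmetic, pick whatever multiple your GPU can actually hold):

#include <stdio.h>

int main(void) {
    const double refresh_hz = 60.0;
    for (int mult = 2; mult <= 5; mult++)
        printf("%dx + 1 cap: %.0f fps\n", mult, mult * refresh_hz + 1.0);
    return 0;
}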

Time is not measured in megasiemens.

Doesn't it also have to do with how we interact with hardware and go through so many abstraction layers just to issue instructions to it?

I'm thinking the modern gayme engines, JS browsers, and botnet OSs filling your pipeline with NOPs might be a likelier suspect.

Attached: adm.-grace-hopper-nanoseconds.webm (474x360, 3.9M)

It has to do with everything. Everything causes input lag. CPUs just happen to be large contributors since they do everything in a system.

Interrupt latency is one of the most important metrics to gauge whether a system is high or low latency. Ryzen processors have higher interrupt latency than mainstream Intel processors due to their dogshit architecture (for latency). On Windows, an 8086k @ 5.2GHz can get .28us interrupt to DPC latency, whereas a Ryzen can only get .7us, even at 3200 C11. If you set it to 4+0 in BIOS, you can drop down to .48us on the same timings. Still worse than Intel by almost double the DPC latency. HT/SMT should obviously be disabled for low latency applications.

Why care about sub-microsecond interrupt latencies? Because of scaling. 4C/4T vs. 8C/16T is a massive difference in input lag on a Ryzen. You can help by forcing NUMA or using isolcpus, but that's only on Linux systems and doesn't really negate latency as much as disabling cores through BIOS.

Want proof? Go disable SMT/HT in BIOS and enjoy better FPS and input response. Disable dyntick and you will get even better input response. On Windows 8/10 you just do bcdedit /set disabledynamictick yes in admin CMD and reboot. Windows 7 is safe from this "feature." On Linux I believe it's a kernel parameter so it's not so trivial to test.
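If you want a crude way to A/B those tweaks on Linux without trusting feelings, something like this is enough to see a difference if there is one. It's plain POSIX, not a DPC-latency measurement, just scheduler wakeup overshoot:

#include <stdio.h>
#include <time.h>

static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void) {
    const struct timespec req = { 0, 1000000 };   /* ask for a 1 ms sleep */
    double worst = 0.0;
    for (int i = 0; i < 1000; i++) {
        double t0 = now_ms();
        nanosleep(&req, NULL);
        double late = now_ms() - t0 - 1.0;        /* how far past 1 ms we actually woke up */
        if (late > worst) worst = late;
    }
    printf("worst wakeup overshoot: %.3f ms\n", worst);
    return 0;
}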

>Complaining about 400 nanoseconds latency
Holy shit Intel shills are getting desperate.

Extremely based, I never thought to visualise it like that. Isn't it sad how much we throw away?

>>>/4chan/
>>>/reddit/

congratulations, you argued semantics and missed my point. any perceptible input lag is solely caused by slow software, not fucking #T

No, there is no such point. Vsync is an engineering problem and this is why peons shouldn't be talking about it. The whole reason we have a "no vsync" option in the first place is, at its core, that the game failed to meet the 1/refresh-rate time constraint. So now you have a game that varies from 30fps to 300fps on a typical mid-range box and it looks like utter shit because such large framerate variance causes judder. There are no guarantees with vsync off. If you run at 120FPS on 60Hz and these were actual real frequencies (instead of the approximations you get in any real system), you would get two tears that stay perfectly still in the middle of the screen. If the frequencies were slightly off (which happens in practice), the tears will slowly move down the screen (toy numbers below). Even with tears not staying in one place (e.g. each frame has the tear in a completely random spot), it will still look like shit, even without judder. Running without vsync causes other artifacts, not just tearing: for example vertical compression, or hiding strobed animations (e.g. a sprite flashes between two colors, but now, because of "no vsync", it is just one color). Now if you use a real monitor like a CRT or an LCD in strobed-backlight mode, unless you have vsync, you will get double images or worse (triple image, quad image etc).
Now that peons have memed their invalid claims about vsync for 20 years we have the misconception that is VRR (gsync/freesync) in every monitor for no reason.
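Toy model of those tear positions (pure arithmetic, numbers made up; at exactly 2x the refresh the tears park at two fixed scanlines, slightly off 2x and they crawl down the screen):

#include <stdio.h>
#include <math.h>

int main(void) {
    const double refresh_hz = 60.0;
    const double render_fps = 119.7;   /* slightly under 2x the refresh rate */
    const int    lines      = 1080;    /* visible scanlines */

    for (int flip = 1; flip <= 8; flip++) {
        double t    = flip / render_fps;            /* when this buffer flip happens */
        double scan = fmod(t * refresh_hz, 1.0);    /* fraction of the way through the refresh */
        printf("flip %d tears at scanline %4.0f\n", flip, scan * lines);
    }
    return 0;
}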

are you the most retarded fucking faggot who ever stepped onto a shitty tech forum? CPUs don't cause input lag. If they do, it's because they merely perturbed a bug in your dog shit game. Do you know what "input lag" means in the context of this thread? We're talking milliseconds or higher. Humans don't feel microsecond lag and if they do there is no system that remotely supports that, due to limitations of the screen and input devices.
you stupid nigger, even if an interrupt took 1ms to process, that would mean adding 1ms of latency to a keyboard or mouse event.
no one cares about some Windows issue

and now some nignog is gona chime in:

It actually does help with some variations of vsync, by giving the algorithm more choices of frame. In some cases, a framerate of 3x the actual output frequency is recommended. Note there are also some situations in which a framerate a fraction of an FPS just above or below the output frequency is best.
The point of VRR is to shrink granularity of the vsync window, beneath the usual vsync limitation of "evenly divisible by current refresh rate or bust". This eliminates the need to jump from 30 to 60 to 120 FPS at 120Hz, instead seamlessly (lol) tracking the game's FPS as if vsync was off, but without tearing.

There are also other advantages to VRR, such as the ability to sidestep clunky and slow modeswitching when watching video, and to see flawless judder-free playback at the weird framerates a lot of content uses (e.g. 23.976, 29.97).
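The "tracking the game's FPS" part in toy form, with made-up render times: fixed-refresh vsync can only show a frame on the next refresh boundary, so frame times quantize to multiples of the refresh period, while VRR just starts scanout when the frame is done, down to the panel's fastest refresh:

#include <stdio.h>
#include <math.h>

int main(void) {
    const double period_ms   = 1000.0 / 120.0;             /* 120 Hz panel */
    const double render_ms[] = { 9.0, 11.5, 14.0, 17.5 };  /* varying frame cost */

    for (int i = 0; i < 4; i++) {
        double vsynced = ceil(render_ms[i] / period_ms) * period_ms;   /* next refresh boundary */
        double vrr     = render_ms[i] > period_ms ? render_ms[i] : period_ms;
        printf("render %4.1f ms -> fixed vsync shows it after %4.1f ms, VRR after %4.1f ms\n",
               render_ms[i], vsynced, vrr);
    }
    return 0;
}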


Has nothing to do with tearing, it merely reduces input lag. For this reason, it can also be combined with vsync.
Input lag isn't caused by vsync, but by double buffering. Vsync using correctly implemented triple buffering eliminates such latency:
anandtech.com/show/2794
Maybe there are some scenes with little motion, where higher detail is more important, along with other scenes where rapid motion and player response are crucial in preference to detail? Or, perhaps, the game simply is flaky, and the hardware/drivers/OS/game engine handling variable FPS elegantly minimizes the impact of such bugs.

see


"b-but windows doesn't matter :(" And neither do you.

Having such a high variance of framerate precludes using the monitor in a strobed mode, which means you get motion blur, which means your retarded 4K monitor is indistinguishable from 480p whenever the image is moving (at all). If VRR worked on strobed modes, the refresh rate variance would be something like 4 (in my head calculation), not 100. Also worth noting: With VRR you also still need to cap the game refresh rate below the monitor's max refresh rate or you get input lag.
The point of being able to drop to 59FPS on a 60Hz monitor without quantizing to 30FPS sounds like bullshit. What I'd guess really happens most of the time is the framerate is dropping down to 40 or some shit (and then the double buffered vsync quantizes this to 30FPS, slightly worse) and the user throws his hands up "OH NOES ITS THE DOUBLE BUFFER I READ ABOUT ON ANANDTECH!". I had an explanation of why VRR doesn't solve input lag any better than beam racing, but I forget, I don't really think of VRR much since it has no appeal.
well beam racing solves tearing since the whole point is to do it on the vsync signal. it's also the only way to have all of 3: 0 input lag, synchronized video, 0 motion blur
Well, input lag is caused by "double buffered vsync", and all forms of "triple buffered vsync". If we wanted to be correct, we'd only mention the word "vsync" when talking about a pin on the VGA cable (and the corresponding signals DVI, HDMI, and DP all have). Unfortunately these shitty hacks aren't well named enough.
>anandtech.com/show/2794
Absolutely false. But that's a common meme thanks to shitty authorities like Anandtech. Triple buffered vsync can either be FIFO or the "proper" way that article refers to (IIRC). FIFO just means you get even more lag than double buffered vsync. "Proper" triple buffering just means you render new frames based on the most recent inputs as often as possible (100% resource utilization), and when the GPU is about to send a new frame to the monitor, it selects the newest of these rendered frames. I did the math before and even at 300FPS (the rate of rendering new frames, each of which the GPU may or may not display) on 60Hz you still get something like half the lag of double buffered vsync (rough numbers sketched below). Also "proper" triple buffered vsync gives you judder, so it's still not a proper synchronization method. This isn't a tiny issue, it's just as bad as tearing. On a CRT or strobed-backlight LCD, judder causes double (or triple or worse) images (but even on a normal LCD, judder still looks bad).
The entire logic of "proper" triple buffered vsync is: "Oh, we can't guarantee a solid 16.66ms render time, so we'll render frames as fast as possible (in the range of 300FPS to 1000FPS, but still drop to 20ms once in a while) to try and have frames that coincidentally are sampled a tiny time before the vsync signal arrives."
Not in any game where the camera is tied to the mouse.
you won't see any of that detail since you're using VRR, which means motion blur. if the camera moves in the slightest the quality drops to 480p. modern LCD applications can only display static images with high resolution, nothing else
you're right, VRR is to minimize the impact of shitty software, and to attach an extra $200 fee to new meme monitors lol
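Rough version of that "something like half" math, under very hand-wavy assumptions: inputs arrive at a uniformly random phase, the renderer always hits its target rate, and scanout time is ignored.

#include <stdio.h>

int main(void) {
    const double T = 1000.0 / 60.0;   /* refresh period at 60 Hz, ms */
    const double R = 300.0;           /* render rate with "proper" triple buffering, fps */
    const double r = 1000.0 / R;      /* time to render one frame, ms */

    /* double-buffered vsync: wait ~T/2 for the next render to start,
     * then the result is shown one full refresh later */
    double double_buf = T / 2.0 + T;

    /* "proper" triple buffering: ~r/2 until the next render picks the input up,
     * ~r to render it, then ~T/2 until the next vblank grabs the newest frame */
    double triple_buf = r / 2.0 + r + T / 2.0;

    printf("double-buffered vsync : ~%.1f ms average input-to-display\n", double_buf);
    printf("\"proper\" triple buffer: ~%.1f ms average input-to-display\n", triple_buf);
    return 0;
}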

BTW, this means that by using "proper" triple buffered vsync, you're admitting that you will still take longer than a frame (20ms vs 16.66) to render sometimes, so you will still get input lag sometimes. Otherwise, you'd be using beam racing (since it does everything 100% perfectly if you can guarantee to meet the framerate). Similar stories apply to VRR IIRC.