Does developing for consoles encourage better programming habits?

PCs receive new hardware several times a year, whereas consoles only get hardware updates maybe once every 2-3 years (and even that is a fairly recent trend; consoles from the 6th generation and earlier received no hardware updates at all during their lifetimes). Programs today are filled with bloat, mostly because computers are fast enough to bruteforce programs through raw power, meaning programmers can be lazy and not debug their shit properly. Consoles, on the other hand, stop being cutting-edge within a couple of years of release, so developers have to be a lot more meticulous and pull performance tricks like writing microcode for that specific hardware. One example of this is Conker's Bad Fur Day on the N64, which many consider to show the limit of what the console could do given enough development time and effort.

My point is: have technological advancements encouraged developers to be lazy, and would it really be a bad idea to make developers get their programs running well on older/obsolete hardware before testing on modern hardware?

Attached: Chimp.png (322x436, 294.42K)

No, not at all. You don't make software that only runs on brand new top-end hardware because that would alienate a big portion of your potential user base.

Absolutely, it requires the programmer to be more diligent with resources. Nu-programmers today code with a mindset of infinite resources and it shows. That said, your diligence will harm your productivity.

...

Sure!
However, I think the most obvious example is mobile, not consoles and computers. I've seen a lot of very simple games that lag miserably on mid-range devices, which are already way more than enough, because the devs only tested them on $900 flagships and didn't care much, no optimisation whatsoever...
On the other hand, I remember a few stunning games that ran perfectly on my old single-core $40 chinese Android handheld console. That was a great job from guys who knew what they were doing!
I'd force bad devs to develop a game for the Sega Megadrive as a punishment; now that would teach them good programming habits. ;)

Why bring up consoles and not Macs instead? They also have stale hardware, but it doesn't stop devs from churning out webshit.

Britbong detected.

Personally, I think it's better to write twice as slowly but twice as well than the other way around. You probably look at programming (or koding) as just a way to earn money, and that's disgusting.

What malfunctioned in your low IQ brain to infer that last statement?

In my early career I did a lot of programming on microcontrollers. That really forces you to write efficient code.
Efficiency of course depends on your limitations and requirements.
For a lot of the stuff I did, the limiting factor was actually flash memory for the program. The code didn't need to execute very fast, but it did need to fit in the microcontroller's memory.
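A minimal sketch of that kind of tradeoff in C (generic, not from any particular project): a bitwise CRC-8 costs a handful of instructions in flash but burns cycles on every byte, while a table-driven one is fast but the table has to live somewhere.

    #include <stdint.h>

    /* Bitwise CRC-8 (polynomial 0x07): tiny in flash, 8 shifts per byte
       at runtime. */
    uint8_t crc8_small(const uint8_t *data, uint32_t len)
    {
        uint8_t crc = 0;
        while (len--) {
            crc ^= *data++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                   : (uint8_t)(crc << 1);
        }
        return crc;
    }

    /* Table-driven CRC-8: one lookup per byte. A precomputed const table
       would eat 256 bytes of flash (an eighth of a 2KB part); building it
       at boot trades that for 256 bytes of RAM instead. */
    static uint8_t crc8_table[256];

    void crc8_table_init(void)
    {
        for (int b = 0; b < 256; b++) {
            uint8_t v = (uint8_t)b;
            crc8_table[b] = crc8_small(&v, 1);
        }
    }

    uint8_t crc8_fast(const uint8_t *data, uint32_t len)
    {
        uint8_t crc = 0;
        while (len--)
            crc = crc8_table[crc ^ *data++];
        return crc;
    }

Which version is "efficient" depends entirely on whether you're short on flash, RAM, or cycles.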

In a way, but I think it also makes things easier, because you can write for one very specific set of hardware instead of writing something more robust that has to function well on a wide variety of hardware. But writing for consoles teaches you how to optimize within a fixed boundary, and in some ways encourages innovation, since you have to do more with the same hardware. For an easy example, look at console games developed when a console is new vs. when it's been around for a while. The most obvious improvement is graphical, but games improve in other ways too, as developers learn tricks and techniques to get the most out of what they're working with.

So it's a toss up, really.

This contradicts itself, because as you said, they increasingly have to learn the tricks to keep doing more and more with the same resources.

The nuprogrammer of today actually does code to a specific set of hardware. It's a magical machine with infinite ram and an always fast enough processor.

...

That statement is generally true. If you are diligently optimizing for either space or run-time your task will take longer to complete. I have no idea how you could interpret it as a viewpoint that programming is just to earn money, unless of course you really are retarded.

(((You))) even thinking about productivity means you are constrained by time. Hobbies aren't constrained by time. You are speaking from the perspective of a wage cuck.

You are painfully stupid.

No (((u)))

No. Developing on older hardware does.
If you code on a PC from the 2000s, your programs are going to have less bloat than if you're writing for the XBone.
Ideally, people should learn to program on machines from the 80s-90s, or at least emulate them.

No, because developing for a console is akin to developing for one single specific computer and ignoring that different hardware exists.
Look at the kind of horrors, created by "clever" console programmers, that emulators now have to understand and work around.
Undefined behaviour used every-fucking-where because it happens to behave predictably in practice on that one machine? Check.
Error handling missing because the bogus data doesn't cause too many visible issues when treated as game data? Check.
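To make the UB point concrete, here's a classic of the genre (generic C, not from any actual console codebase): type-punning a float through a pointer cast. On one console, with the one compiler everyone uses, it behaves identically every time, so it ships; per the C standard it's undefined, and the next compiler version or port is free to break it.

    #include <stdint.h>
    #include <string.h>

    /* UB: violates strict aliasing. "Pretty defined in practice" on the
       original toolchain, a landmine on any other. */
    uint32_t float_bits_ub(float f)
    {
        return *(uint32_t *)&f;
    }

    /* The defined way: memcpy, which any decent compiler turns into the
       same single register move anyway. */
    uint32_t float_bits_ok(float f)
    {
        uint32_t u;
        memcpy(&u, &f, sizeof u);
        return u;
    }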

And let's not even talk about security: on consoles it's not a concern, because the user can't run anything but pre-approved good goy programs, so why care about good practices?

Don't take your time (even your hobby time) for granted, for you are not promised tomorrow. One moment you're on a nationwide tour evangelizing God's own operating system, next thing you're dead in a car crash. Don't take this as an argument to Kode in JS though.

/thread

You haven't achieved mastery over your machine until you have utilized undocumented instructions. As for the 'emulators', their job is to copy the hardware, bug for bug.
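"Bug for bug" includes the unofficial opcodes on the NES's 6502, which real games used. A sketch of one in an emulator core (simplified, made-up cpu struct, no cycle counting):

    #include <stdint.h>

    struct cpu { uint8_t a, x, p; };

    /* Update the Z (0x02) and N (0x80) flags from a result. */
    static void set_zn(struct cpu *c, uint8_t v)
    {
        c->p = (uint8_t)((c->p & ~0x82u) | (v == 0 ? 0x02 : 0) | (v & 0x80));
    }

    /* LAX: an undocumented opcode that behaves like LDA and LDX fused,
       loading A and X with the same value. Skip it and some games break. */
    void op_lax(struct cpu *c, uint8_t operand)
    {
        c->a = c->x = operand;
        set_zn(c, operand);
    }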

Terry played chicken with a train.

Tell that to the developers of Unreal Engine 4

AKSHUALLY...
I'm fairly certain that at least the original PlayStation had several minor hardware revisions. Because of this, a lot of low-level hardware functions had to be accessed through the BIOS, so as to ensure every game would be compatible with later hardware revisions. Prior to this, console programming was the wild west, with all kinds of undocumented features, quirks, and bugs being used in insanely clever ways to do impressive things. That was lost once consoles started having a BIOS, and later OSes, taking control of things. Today, a console is just a computer with a hard limit on processing power. If it doesn't downright use a PC graphics API like DirectX, it will at least be a proprietary yet very familiar imitation of one. Not very interesting stuff, to be honest.
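A hand-wavy sketch of why that BIOS indirection works (made-up names and addresses, not the actual PlayStation interface): the game calls a stable firmware entry point instead of poking registers directly, and each board revision ships a BIOS that points that entry at the right implementation.

    #include <stdint.h>

    /* Hypothetical: rev A and rev B boards put the pad interface at
       different addresses. Games must not care which one they're on. */
    #define PAD_REG_REV_A ((volatile uint32_t *)0x10001000u)
    #define PAD_REG_REV_B ((volatile uint32_t *)0x10002000u)

    static uint32_t read_pad_rev_a(void) { return *PAD_REG_REV_A; }
    static uint32_t read_pad_rev_b(void) { return *PAD_REG_REV_B; }

    /* The BIOS for each revision fills this in at boot; games only ever
       call through it, so they keep working on later hardware. */
    struct bios_calls {
        uint32_t (*read_pad)(void);
    };

    static struct bios_calls bios = { read_pad_rev_a };  /* rev A firmware */

    uint32_t game_poll_input(void)
    {
        return bios.read_pad();  /* stable API, revision-proof */
    }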
Look up "the game hut" on jewtube. It's this guy who worked at Traveler's Tales in the 90s. He has tons of videos explaining a bunch of ingenious stuff they did with the Sega Genesis and other systems.

Don't forget the obvious example of the N64 and the Expansion Pak.

They should have to do a project on the VCS 2600, where you have to count every clock cycle and learn to live with 128 bytes of RAM.

Attached: Schematic_Atari2600A_1000.png (700x318 106.37 KB, 74.28K)

The N64 also had the 64DD (which was only released in Japan). While generally no actual hardware expansions were released for most of those consoles, they were all built with expansion in mind. The Genesis had a CD expansion, and the PlayStation started out as a CD expansion for the SNES. And the NES and SNES (at least) could have extra chips added to the cart. So while the hardware was far more fixed than a PC's, it was not nearly as fixed as OP seems to think, and really most new hardware came about because it was easier to market a new system than an expansion to an old one.

Both the SNES and NES also had an expansion connector that went largely unused other than some Japan-exclusive weirdness.

The problem with game consoles is that they use weird CPU architectures, and then you're basically learning platform-specific shit that is guaranteed to be thrown out in a few years, obsolete and useless. The PS4 and Xbone are the only recent examples where this isn't true (they're both x86), but it's still true that a lot of console stuff ages poorly. You can still run old Java code on a new computer, using old Apache Commons shit. But the problem with new and disposable appliance-type stuff is that it comes with its own proprietary APIs and things that come and go all the time. Hell, even mobile app development is similar. Too many different SDKs and too much OS fragmentation on Android, and everything used to be done in Java but now they're switching to Kotlin. And Android IPC is annoying as hell. As for iOS, you used to do things in Objective-C but now they're switching to Swift, and so on.

Consoles are only good if you wanna be a vidya developer, but that's just a neckbeard pipe dream. And in terms of tech and practicality from a development standpoint, they suck.

That was only true for the PS3, IIRC, with its secret special multiple-cores sauce.
Otherwise consoles used the same standard off-the-shelf parts. Not x86, but MIPS, ARM, PowerPC, what have you.

Attached: Maniac_Mansion_artwork.jpg (220x264 28.06 KB, 31.89K)

Attached: Half-Life_Cover_Art.jpg (365x273 118.67 KB, 4.36K)

RCT is proof every game needs to be in assembly.

That was my first PC game when I was nine. I could have sworn I still had it in its box somewhere, but couldn't find it when I went looking for it some time ago.

Attached: 1362302773901.png (640x360 24.12 KB, 3.83M)

Utilizing undocumented anything is short-sighted.
Yes, and that's exactly why they're the best source of info on how shit console code is.

What's more impressive is that it apparently runs on a 486, albeit significantly slower than on Pentium II+ hardware.

RCT is made in assembly?

Attached: Hold Up.jpg (600x576, 20.93K)

Different concept, but most NES game cartridges, all but a small few, expanded the addressable ROM size with bank switching. The Game Genie was also interesting in the way it hijacked the NES address bus.
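Roughly what that looks like from an emulator's side (a minimal UxROM-style sketch, simplified): the CPU only sees 32KB of cartridge address space, so a write into ROM space is intercepted by the mapper as a bank select, swapping which 16KB slice of a much larger ROM appears at $8000.

    #include <stdint.h>

    #define PRG_BANK_SIZE 0x4000u               /* 16KB */

    static uint8_t prg_rom[256 * 1024];         /* 256KB ROM = 16 banks */
    static uint8_t prg_bank_count = 16;
    static uint8_t prg_bank = 0;                /* bank mapped at $8000 */

    /* CPU read in $8000-$FFFF: switchable bank at $8000-$BFFF, last bank
       hard-wired at $C000-$FFFF (UxROM behaviour). */
    uint8_t mapper_read(uint16_t addr)
    {
        if (addr < 0xC000)
            return prg_rom[prg_bank * PRG_BANK_SIZE + (addr - 0x8000u)];
        return prg_rom[(uint32_t)(prg_bank_count - 1) * PRG_BANK_SIZE
                       + (addr - 0xC000u)];
    }

    /* A CPU write anywhere in ROM space actually hits the bank-select
       latch on the cart. */
    void mapper_write(uint16_t addr, uint8_t value)
    {
        (void)addr;
        prg_bank = value % prg_bank_count;
    }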

Having custom chips in your console games is really a lost art. Some NES/Famicom games even had upgraded sound hardware. The SNES, of course, had custom chips in plenty of games (apparently the SNES version of Dungeon Master used a chip to convert Amiga graphics files to the SNES format). It's just too bad it drives up cart prices and makes emulation a pain.

Yeah, 99% of RCT is hand-coded x86 assembly and the other 1% is C.

There was an interesting article on Gamasutra which I can't find right now, but basically it was by a senior dev from the PSX days. He said that on the projects he was a part of, no matter how many constraints the team was aware of and how much effort they put into optimizing, at crunch time the game would always be over its memory budget.

So at the start of a new project he was leading, he allocated some memory as "this is crucial boot code or some shit, don't touch it" and the team ignored it. Come crunch time, when they were predictably 5% over budget, he deleted the preallocated block and suddenly they had a shippable game.

There's a term for it that I forget; a project expanding to consume all of its resources (Parkinson's law, more or less). Even in a heavily constrained environment it still happens. It seems the best way to counter it is proactively.
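The trick itself is basically two lines; a sketch in C (names invented):

    #include <stdlib.h>

    /* At boot, grab a chunk and label it untouchable. */
    #define EMERGENCY_RESERVE (512 * 1024)   /* pick your poison */
    static void *memory_reserve;

    void engine_init(void)
    {
        /* "Crucial boot code, don't touch" -- actually a rainy-day fund. */
        memory_reserve = malloc(EMERGENCY_RESERVE);
    }

    /* Come crunch time, when the game is "impossibly" 5% over budget: */
    void lead_releases_reserve(void)
    {
        free(memory_reserve);
        memory_reserve = NULL;               /* suddenly it ships */
    }

On a console you'd carve the reserve out of a fixed memory map rather than malloc it, but the principle is the same: the budget everyone plans against is a lie in your favor.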

The unreleased NES Hellraiser game went through a few iterations. One of them was to have a Z80 coprocessor and 4MB of RAM on the cart. The NES itself only had 2KB. Wew.

Attached: hellra65.jpg (443x473, 52.97K)

People are horrible at spidering
gamasutra.com/view/news/310660/Memory_Matters_A_special_RAM_edition_of_Dirty_Coding_Tricks.php
polygon.com/2017/6/22/15820540/crash-bandicoot-an-oral-history
news.ycombinator.com/item?id=9737156

Attached: DoMnAqlVAAE_U-1.jpg (923x1200, 118.95K)

no. modern consoles are just web browsers that load webgl programs into their local appstore. literally all any modern gamedev does is create assets to load into their game engine and write some scripts

All written by one guy.

Holy hell. I'm writing a NES emulator at the moment, and this is going to take a long, long time. The variety of cartridge hardware (specifically the memory management controllers, MMCs) you have to handle is quite daunting.

en.wikipedia.org/wiki/Memory_management_controller
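One common way to keep that manageable (a sketch, not from any particular emulator): treat each mapper as an interface the cartridge exposes and route every cartridge-space access through it, so the CPU/PPU cores never special-case hardware. The iNES header's mapper number decides which implementation gets installed.

    #include <stdint.h>

    /* Each cartridge type (NROM, MMC1, MMC3, ...) supplies handlers. */
    struct mapper {
        uint8_t (*cpu_read)(struct mapper *m, uint16_t addr);
        void    (*cpu_write)(struct mapper *m, uint16_t addr, uint8_t v);
        void    *state;   /* per-mapper bank latches, shift registers... */
    };

    /* The core just dispatches; all cart weirdness hides behind this. */
    static inline uint8_t cart_read(struct mapper *m, uint16_t addr)
    {
        return m->cpu_read(m, addr);
    }

    static inline void cart_write(struct mapper *m, uint16_t addr, uint8_t v)
    {
        m->cpu_write(m, addr, v);
    }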

It's no surprise that once the technical barrier to entry for game development plummeted, all the freaks turned up to infest it.