What do anons think about test driven development?

I mean, that's more or less agreeable, but you seem to downplay how often you can't manually test things. For common cases like referencing lots of external state, compiling for multiple platforms, juggling lots of internal state, or generally anything even moderately complicated, manual testing is a real pain in the ass. Also, the debug information is so useful: a test doesn't just tell you that something went wrong, which you'd already know if your program flat out doesn't work, it tells you where your program went wrong. It's difficult to imagine something more powerful than that when dealing with runtime errors.
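To make that concrete, here's a minimal sketch (the `parse_port` function is made up for illustration) of how a failing assertion points at the exact check that broke, not just "the program doesn't work":

```python
def parse_port(s):
    # Hypothetical function under test.
    value = int(s)
    if not 0 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value

def test_parse_port():
    # Each assert names one exact expectation, so a failure
    # points at the broken case rather than just "it's broken".
    assert parse_port("8080") == 8080
    assert parse_port("0") == 0
    try:
        parse_port("70000")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range port")

test_parse_port()
print("all checks passed")
```

A failure here gives you a traceback pointing at the exact line and input that broke, which is the "where" the anon is talking about.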

You're insane. Making non-trivial changes to a complex piece of software with no way to verify you didn't fuck up some edge case is how code rot happens.

Well it goes wrong in the section I just modified.

If it's that complex it will be easier to manually go through the edge cases as it would be a LOT more work to write a test.

Modify the JVM. Manually test all edge cases. Try to finish before dying of old age.

That's a damn good meme there.

Attached: IMG_20180718_074054.jpg (1079x213, 51.94K)

It's nice when you're writing code to conform to specifications, but otherwise it's best for testing interface points such as API/ABI boundaries.

Tests are fine. Writing them before actual code is bullshit. Testing completely trivial things is also bullshit.

Aside from what the other anons have said, adopting TDD and its derivatives like BDD provides a major benefit for open source projects in CY+3. The sentence "the automated build system is misogynistic because it rejected my merge request on the grounds that all my tests failed, we need a CoC to remedy this" sounds completely insane to 99.9999% of developers, so it serves as another bulwark protecting people who actually care about meritocracy and their projects from the SJW menace.


If you know exactly what the end result is going to be then you are working on an already solved problem, in which case you are wasting your time.


You clearly have no fucking idea what you are talking about.

The seL4 test suite takes over 8 hours to run on a high-end system, and that's for only ~10,000 LOC backed by >200,000 lines of tests. To date it's the only kernel proven to be functionally correct.

The most complex codebase I have worked on, in terms of testing, had so many possible configuration permutations that it would have taken months to test them all, and since it was an application that required high reliability, we needed to make damn sure it worked. Through a combination of architecting the software to make testing easier and only testing a subset of permutations in-house (the software would run a test on the configuration chosen by the customer during installation), we got the suite down to 3-4 hours, including a small amount of benchmarking. Even then this still caused development bottlenecks, since the build system would run the tests on every single merge request to ensure no bugs were being introduced.
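A rough sketch of the permutation blowup being described here (the configuration axes are made up, not from the actual project): the full matrix grows multiplicatively with every axis, so you test a subset in-house and smoke-test the customer's actual configuration at install time.

```python
import itertools

# Hypothetical configuration axes; the names are illustrative only.
platforms = ["linux", "windows", "solaris"]
databases = ["postgres", "oracle"]
replication = ["none", "sync", "async"]

# Exhaustive matrix: grows multiplicatively with every new axis.
full_matrix = list(itertools.product(platforms, databases, replication))

# In-house subset: pin one axis and cover the rest fully; the remaining
# permutations only get a smoke test against the configuration the
# customer actually chose, run during installation.
in_house = [c for c in full_matrix if c[2] == "sync"]

print(len(full_matrix))  # 18 combinations in total
print(len(in_house))     # 6 combinations tested in-house
```

Three small axes already give 18 combinations; a real product with dozens of options is where "months to test them all" comes from.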


Prototyping things before you have written any tests is fine, but the minute you start writing code you intend to ship to customers, it's better to already have at least some tests in place. A good development setup will run tests and benchmarks against the affected code on every merge request so that regressions can be detected and remedied as they happen. The worst thing that can happen is a small bug or slow path getting introduced early on and only noticed late in development, since you risk having to rewrite large sections of code to fix it.

How are you writing tests and why don't you know how to do it an easier way?

Unit testing exists for a reason; is it that hard to verify that, given a certain input, you get the expected output? Is it that hard to write a test suite?
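In that spirit, a minimal sketch of a unit test suite using Python's stdlib `unittest` (the `slugify` function is made up): given a certain input, assert the expected output.

```python
import unittest

def slugify(title):
    # Made-up function under test: lowercase the title and
    # join its words with dashes.
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    # Each test: given a certain input, verify the expected output.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Some   Long Title "), "some-long-title")

if __name__ == "__main__":
    # exit=False so this also runs cleanly inside a larger script.
    unittest.main(argv=["slugify-tests"], exit=False)
```

That's the whole suite: a couple of input/output checks that take seconds to write and run on every change.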

Attached: 6f8ef1403f4c37f0dc491085ccb51f871ff9dedabcbcdc67a51a3c8e32c01a2d.png (472x415, 36.84K)