What do anons think about test driven development?

Attached: kystpmagk0711.png (1080x1305, 2.35M)

more like tard driven development

t. Torvalds

It has some use, for example when the base software has deprecated dependencies and a bunch of things need to change.
Create a set of tests, make changes until everything that needed replacing is replaced, and validate with the tests.

Otherwise, it's not worth the effort.
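For the migration case I mean something like this (just a sketch, names made up): pin the current behavior down first, swap the dependency underneath, rerun until green.

#include <cassert>
#include <string>

// Function still sitting on the deprecated dependency.
std::string render_greeting(const std::string& name) {
    return "Hello, " + name + "!";
}

int main() {
    // Characterization tests: lock in current behavior, then swap
    // the dependency and rerun until these pass again.
    assert(render_greeting("anon") == "Hello, anon!");
    assert(render_greeting("") == "Hello, !");
    return 0;
}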

I like the idea in theory, but in practice it's a pile of crap.
Maybe there are development teams that made it work for them, but I've only seen it waste time with subpar tests that people hacked together just to be done with it.
And when they tried to make them good, management was bitching about how slow everything went.
In the end they're not worth it. Maybe on software with a very long lifespan, I don't know.
I'd suggest only writing tests for the complex parts, while skipping the mundane parts. It's not how it's supposed to be done, but it'll be less of a timesink and more useful.

I'm not experienced enough to have a well-formed opinion, but I think speedrunning the writing of your program (probably leaving dozens of TODOs in the process) and then trying to unbreak it is a terrible plan of development and results in bad code.

Eh, I fucked that post up a bit.

If you know exactly how something is going to work before programming it, then you shouldn't have to test it. And if you don't know how it's going to work beforehand, then writing tests is just a complete waste of time.

It's wonderful at getting rid of regressions and trading runtime bug hunting for compile-time test writing. The main problem lots of folks have with it is that it doesn't integrate well into their workflow. Opening a separate file and writing tests for my program is a bit of a pain, especially when the tests get complex and I'm thinking more about the behavior of my test than the behavior of my functions. Design by contract with automatic procedural test generation, like in Eiffel, would be wonderful to have in more languages. Not only does it let you maintain your workflow by staying focused on writing functions for the task at hand, it also gets you greater test coverage in a lot of situations while you do it.
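Something like this is what I mean, faked with plain asserts since C++ has no built-in contracts (names made up):

#include <cassert>
#include <vector>

// Poor man's design by contract: the pre- and postconditions live
// on the function itself instead of in a separate test file.
int checked_pop_back(std::vector<int>& v) {
    assert(!v.empty());               // precondition
    const auto old_size = v.size();
    const int value = v.back();
    v.pop_back();
    assert(v.size() == old_size - 1); // postcondition
    return value;
}

int main() {
    std::vector<int> v{1, 2, 3};
    assert(checked_pop_back(v) == 3);
    assert(v.size() == 2);
    return 0;
}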

Automated unit tests are pointless for most well-designed software. 99% of the time you'll uncover the problem when you manually test the changes yourself.
In cases where you can't test the changes yourself, having some form of automated testing is important.

That's more or less agreeable, but you seem to downplay how often you can't manually test things. For anything that references a lot of external state, compiles for multiple platforms, carries a lot of internal state, or is even moderately complicated, manual testing is a real pain in the ass. The debug information is also really useful: it doesn't just tell you that something went wrong, the way a program that flat out doesn't work would, it tells you where it went wrong. It's hard to imagine something more powerful than that when you're chasing runtime errors.

You're insane. Making non-trivial changes to a complex piece of software with no way to verify you didn't fuck some edge-case causes code rot.

Well it goes wrong in the section I just modified.

If it's that complex, it will be easier to manually go through the edge cases, as it would be a LOT more work to write tests.

Modify the JVM. Manually test all edge cases. Try to finish before dying of old age.

That's a damn good meme there.

Attached: IMG_20180718_074054.jpg (1079x213, 51.94K)

It's nice when you're writing code to conform to specifications, but otherwise it's best for testing points of interface such as API/ABI boundaries.

Tests are fine. Writing them before actual code is bullshit. Testing completely trivial things is also bullshit.

Aside from what the other anons have said, adopting TDD and derivatives like BDD provides a major benefit for open source projects in CY+3: the sentence "the automated build system is misogynistic because it rejected my merge request on the grounds that all my tests failed, we need a CoC to remedy this" sounds completely insane to 99.9999% of developers, so automated testing serves as another bulwark for the people who actually care about meritocracy and their projects.


If you know exactly what the end result is going to be then you are working on an already solved problem, in which case you are wasting your time.


You clearly have no fucking idea what you are talking about.

The seL4 test suite takes over 8 hours on a high-end system to run all tests, and that's for only ~10,000 LOC backed by over 200,000 lines of tests and proofs. To date it's the only kernel proven to be functionally correct.

The most complex codebase I have worked on in terms of testing had so many possible permutations that it would have taken months to test them all, and since it was an application that required high reliability, we needed to make damn sure it worked. Through a combination of architecting the software to make testing easier and only testing a subset of permutations in-house (the software would run a test on the configuration chosen by the customer during installation), we got it down to 3-4 hours including a small amount of benchmarking. Even then this still caused development bottlenecks, since the build system would run the tests on every single merge request to ensure no bugs were being introduced.


Prototyping things before you have written any tests is fine, but the minute you start writing code you intend to ship to customers, it's better to already have at least some tests in place. A good development framework will run tests and benchmarks on sections of code with every merge request so that regressions can be detected and remedied as they happen. The worst thing that can happen is a small bug or slow code being introduced early on and only noticed later in development, since you risk having to rewrite large sections of code to fix it.

How are you writing tests and why don't you know how to do it an easier way?

Unit testing exists for a reason; is it that hard to verify that, given a certain input, you get the expected output? Is it that hard to write a test suite?
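It can literally be this simple (made-up example):

#include <cassert>

// The function under test.
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main() {
    // Given a certain input, verify the expected output. That's the
    // whole trick.
    assert(clamp(5, 0, 10) == 5);    // in range
    assert(clamp(-3, 0, 10) == 0);   // clamped low
    assert(clamp(42, 0, 10) == 10);  // clamped high
    return 0;
}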

Attached: 6f8ef1403f4c37f0dc491085ccb51f871ff9dedabcbcdc67a51a3c8e32c01a2d.png (472x415, 36.84K)

That seems very unproductive. Every time you make a trivial change you would have to wait 8 hours. I'm sorry, but manually testing your change is going to be much faster.
You make it sound like any change can break any part of the codebase. In reality a change is only going to affect a localized section of your code. It's not like fixing a font rendering issue is going to break your networking code.

A. The project is complex.
B. It's not written in a way that can be tested, meaning you will have to spend a long time creating proper mock objects for potentially hundreds of different objects.

sqlite.org/testing.html

A guy I work with thinks that asserts exist to verify conditions on user input. He's an okay idea guy (his ideas aren't entirely shitty), but sometimes the code he writes is a mess he throws at everyone else.
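For anyone who doesn't see the problem: asserts document programmer invariants and compile out under NDEBUG, so using them on user input means release builds skip the check entirely. Rough sketch (names made up):

#include <cassert>
#include <optional>
#include <string>

// Wrong: the assert disappears under NDEBUG, so release builds
// happily accept garbage from the user.
int parse_age_wrong(const std::string& input) {
    const int age = std::stoi(input);
    assert(age >= 0 && age <= 150); // compiled out in release
    return age;
}

// Right: user input gets a real check that survives every build.
std::optional<int> parse_age(const std::string& input) {
    const int age = std::stoi(input);
    if (age < 0 || age > 150) return std::nullopt;
    return age;
}

int main() {
    assert(!parse_age("9000").has_value());
    assert(parse_age("30").value() == 30);
    return 0;
}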

OP
It can be helpful. I have found stupid mistakes in my code from the brain-dead tests written for it. The problem is that a good bulk of unit tests are written for code that is trivially correct. Remember the 80/20 or 90/10 rule: if 90% of bugs are in 10% of the code, then the tests covering the other 90% of the code are exercising code that is statistically unlikely to be incorrect. Most bugs rear their heads when you start integrating the pieces together and find that somebody assumed something that wasn't true.

Example from a guy I work with:
A) Read in information
B) Validate information
C) Do some transformations
D) Report validation errors and prompt to continue with invalid data
He was working on step C and decided that since validation had already been done, he would never get a NULL pointer and didn't have to check for one. The requirements say the program can continue with invalid input, a requirement he disagreed with. All the little pieces work as tested, yet the full stack mysteriously crashes. His defense: "Is that so? I didn't know that was a requirement! I didn't notice that every other function around the one I modified checks for NULL pointers."
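Sketch of the failure mode (all types made up): step C has to tolerate invalid records, because step D reports them and the user can choose to continue anyway.

#include <cassert>
#include <string>
#include <vector>

struct Record {
    const std::string* payload = nullptr; // null == failed validation
};

// Step C: must not assume validation scrubbed the data, since the
// requirement is that invalid records flow through to step D.
std::vector<std::string> transform(const std::vector<Record>& records) {
    std::vector<std::string> out;
    for (const auto& r : records) {
        if (r.payload == nullptr) {
            out.push_back("<invalid>"); // pass it through, don't crash
        } else {
            out.push_back(*r.payload);
        }
    }
    return out;
}

int main() {
    const std::string good = "good";
    const std::vector<Record> input{{&good}, {nullptr}};
    assert(transform(input).size() == 2); // the full stack keeps going
    return 0;
}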

Outdated methodology; the good parts such as iterative development and acceptance testing-as-living-documentation are better implemented in BDD for large organizations, because all stakeholders can read the tests (not just the devs) and automated testing is put in reach of non-programmer analysts, making better use of everyone's time. For smaller projects, it's overkill. There's just no use case for it.

My company has a TDD mandate (I don't know what else to call it) but nobody actually does it; tests are written afterwards, and that's just natural. I agree 100% with Torvalds on tests in code.

But Linux has the benefit of not having deadlines or the need to sell the product, unlike the corporations. They hire such code monkeys that you literally can't go without tests, and bullshit like TDD was invented because of that (just like Java).

I also hate that all of our code has every method virtual just so it can be mocked in Google Test, and it's only to check whether a function gets called.

void func1()
{
    // do something in this function
    func2(); // virtual just so you can check it was called in your UT
}

Tell that to the NT kernel developers.

You're almost certainly doing something wrong. Either write a preprocessor macro that automatically makes a mockable function, or get hardcore with your test suite and have it look at debugging symbols to find out if a method got called.
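Another option that avoids virtual entirely: a compile-time seam via a template parameter. Rough sketch, names made up:

#include <cassert>

// Production dependency: a plain free function, nothing virtual.
void func2() { /* real work */ }

// func1 takes its dependency as a template parameter, so tests can
// swap in a spy with zero vtable cost in production builds.
template <typename F>
void func1(F&& call_func2) {
    // do something in this function
    call_func2();
}

int main() {
    // Test: verify func2 was called, no mocking framework needed.
    bool called = false;
    func1([&] { called = true; });
    assert(called);

    // Production call site.
    func1(func2);
    return 0;
}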

I suggested the debug symbols approach but got turned down. This is not my own code, Jesus Christ anon...

it sucks massive cocks

t. anon who never developed a single piece of software
and no, ricing Arch and writing shitty scripts does not count as developing software

Agreed, writing tests before the code implies you know what the API/spec of your code should be, which is rarely true.

E.g. if I'm refactoring some private functions, then I will write the new version, iterate until I'm happy with the API of the new version (i.e. it's simpler and results in simpler code where it's called) and the existing tests pass, then write tests for the new function.

The same is true with user facing APIs except I will iterate on sample user code.

This is pretty much what I've been thinking lately.
Write code -> Write tests while I refactor -> Optimize and/or simplify
It seems to me that TDD makes the most sense when you're refactoring, because by that point you know more or less what your functions are going to do but you haven't finished iterating on them yet. TDD is really made for making iteration easier.

Coming from the QA side, TDD is fan-fucking-tastic when you have developers on an SOA application, particularly with microservices. In those cases I only end up with the difficult, fun problems instead of cleaning up after devs who write shit code and don't even test it. Don't be that dev.

It needs to be zero-cost during runtime.
github.com/onqtam/doctest
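For anons who haven't seen it: the tests live in the same file as the code, and one define compiles them all out. Sketch from memory, check the repo docs for exact usage:

// build with -DDOCTEST_CONFIG_DISABLE and everything test-related
// compiles away to nothing
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include "doctest.h"

int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }

// Lives right next to the function it tests.
TEST_CASE("factorial handles small values") {
    CHECK(factorial(0) == 1);
    CHECK(factorial(5) == 120);
}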

Architecting based on known requirements will give you a good enough idea of what the API/spec should be before starting.

You shouldn't be writing tests for private members; friend isn't something that should be used often.


TDD makes the QA process far more efficient: every thorough test written means the relevant piece of code doesn't need to be manually checked for regressions on every merge (unless the spec changes).

A good build system will run code linters and all sorts of tests and benchmarks on a variety of hardware configurations and reject anything which doesn't meet standards or regresses the codebase.