Look, the big O notation from your CS 101 data structures class is not an accurate description of how caches actually behave.
Zero-terminated "strings"
In C, char* is also used for byte arrays.
This is a programmer error, but it could be prevented if the design of the language and the stdlib were less shit.
Programmers will always make some errors, but entire classes of them can be prevented outright.
No. But performance always takes priority.
And I think we should listen to 's practical advice and not some stupid theory developed by Java shitcoders at some university.
It doesn't.
The amortized cost of adding an item is still O(1).
Adding an item to the middle of a vector is not amortized O(1).
You can change how often a vector reallocates itself, but really, the default growth behavior is sufficient for most uses.
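For anyone who would rather see it than argue about it, here's a rough sketch of that default behavior (the exact growth factor is implementation defined, typically 1.5x or 2x):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Capacity grows geometrically, so a million push_backs trigger only a few
// dozen reallocations at most and appending stays amortized O(1).
int main() {
    std::vector<int> v;
    std::size_t last_cap = 0;
    for (int i = 0; i < 1'000'000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_cap) {
            last_cap = v.capacity();
            std::printf("size=%zu capacity=%zu\n", v.size(), last_cap);
        }
    }

    // And if you know the size up front, reserve() skips the reallocations entirely.
    std::vector<int> w;
    w.reserve(1'000'000);
    for (int i = 0; i < 1'000'000; ++i) w.push_back(i);
}
```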
Neither is it in a linked list if you first need to find the place to insert: you'll need an O(n) traversal to get there.
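Minimal sketch of that point, nothing clever assumed, just the standard containers:

```cpp
#include <cstddef>
#include <iterator>
#include <list>
#include <vector>

// Inserting "in the middle" is linear either way once you count the search:
// the list spends O(n) walking to the position, the vector spends O(n)
// shifting what comes after. The list only wins if you already hold an
// iterator to the insertion point.
int main() {
    std::list<int>   l = {1, 2, 3, 4, 5};
    std::vector<int> v = {1, 2, 3, 4, 5};
    std::size_t pos = 2;

    auto it = l.begin();
    std::advance(it, pos);   // O(pos): pointer chasing to reach the spot
    l.insert(it, 42);        // O(1) once you're there

    v.insert(v.begin() + static_cast<std::ptrdiff_t>(pos), 42);  // O(n - pos): contiguous shift
}
```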
Yours neither, loser. You literally made a bold claim without backing it up or providing proof. Your nodev ass can't even write a reverse Polish calculator, LOL.
Again, you keep leaning on big O notation when talking about the speed of these data structures. The real world does not follow big O. Iterating over a vector that sits in one contiguous page is thousands of times faster than jumping between the pages where linked list nodes happen to be allocated, despite the same time complexity.
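Crude benchmark sketch if you want to check it yourself; the exact numbers depend entirely on your machine and allocator, the point is the gap:

```cpp
#include <chrono>
#include <cstdio>
#include <list>
#include <numeric>
#include <vector>

// Same O(n) traversal, very different constants: the vector streams through
// contiguous cache lines, the list chases pointers scattered across the heap.
int main() {
    constexpr int N = 10'000'000;
    std::vector<int> v(N, 1);
    std::list<int>   l(N, 1);

    auto time_sum = [](const auto& c, const char* name) {
        auto t0 = std::chrono::steady_clock::now();
        long long s = std::accumulate(c.begin(), c.end(), 0LL);
        auto t1 = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("%s: sum=%lld in %lld ms\n", name, s, static_cast<long long>(ms));
    };

    time_sum(v, "vector");
    time_sum(l, "list");
}
```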
Yup. This is why compilers warn when you do an implicit conversion, and this is why Apps Hungarian Notation is useful.
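Rough illustration of the Apps Hungarian idea; the prefixes, names and the escaping function are made up for the example, not any official convention:

```cpp
#include <string>

// The prefix records what the value means, not its machine type: both
// variables are plain std::string, but passing usComment where an s-prefixed
// (already escaped) string is expected reads wrong at a glance.
std::string escape_html(const std::string& usText) {
    std::string sOut;
    for (char c : usText) {
        if (c == '<')      sOut += "&lt;";
        else if (c == '>') sOut += "&gt;";
        else if (c == '&') sOut += "&amp;";
        else               sOut += c;
    }
    return sOut;  // anything prefixed with s is assumed safe to emit
}

int main() {
    std::string usComment = "<script>alert(1)</script>";  // us = unsafe, raw input
    std::string sComment  = escape_html(usComment);       // s  = safe, escaped
    (void)sComment;
}
```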
1st year CS theory that you ought to know if you want to be taken seriously here.
Those are use cases where vectors are indeed better.
Do you think data structures stop existing outside of RAM?
Filesystems make extensive use of linked data structures.
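Toy sketch of one such structure, loosely modeled on a FAT-style allocation table; heavily simplified, and the values here are made up for illustration:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// A FAT-style allocation table is a linked list stored on disk: fat[c] holds
// the next cluster of the file that occupies cluster c, and reading a file
// means chasing those "next" links until an end-of-chain marker.
constexpr std::uint32_t END_OF_CHAIN = 0xFFFFFFFFu;

int main() {
    std::vector<std::uint32_t> fat = {END_OF_CHAIN, 3, END_OF_CHAIN, 5, 0, 2};
    std::uint32_t first_cluster = 1;  // the file starts at cluster 1

    for (std::uint32_t c = first_cluster; c != END_OF_CHAIN; c = fat[c])
        std::printf("cluster %u\n", static_cast<unsigned>(c));  // visits 1 -> 3 -> 5 -> 2
}
```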
In the middle, or anywhere besides the end. Dynamic vectors can be used somewhat effectively as stacks because of that, but that's about it.
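Quick sketch of the stack case, which is exactly the part a vector is good at:

```cpp
#include <cassert>
#include <vector>

// All the action is at the end: push, top and pop are amortized O(1),
// with no per-node allocation like a linked list would need.
int main() {
    std::vector<int> stack;
    stack.push_back(1);       // push
    stack.push_back(2);
    int top = stack.back();   // top
    stack.pop_back();         // pop
    assert(top == 2 && stack.size() == 1);
}
```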
L M A O
M M
A A
O O