Fixing weak typing fingers

5/2/2023

Here's my stab at a list of the main advantages of static typing:

Static types provide constraints that can detect some type errors early, before the program runs (or at the time a row is being inserted/updated, an XML doc parsed, etc.)

Static typing offers more (or perhaps just easier) opportunities for performance enhancements. For instance, it's easier to create intelligent database indexes if you have a rich, thorough data model. And compilers can make better decisions when they have more precise information available about variable and expression types.

With verbose type systems like those of C++ and Java, you can look at the code and see the static types of your variables, expressions, operators and functions. This advantage isn't as true in languages with type inference like ML and Haskell; they evidently feel that requiring the type tags everywhere is a negative. But you can still specify type tags in cases where it clearly aids readability - something most dynamic languages don't allow.

Static type annotations make it easier to do certain kinds of automated processing of your source code. This includes automated doc gen, syntax highlighting and indenting, dependency analysis, style checking, and other code-looking-at-code kinds of stuff. Static type tags simply provide more "purchase" for compiler-like tools: more distinct syntactic elements available for lex-only tools, and less guesswork needed for semantic analysis.

People can look at an API or schema (as opposed to the implementation code or the database tables) and get a feel for its overall structure and usage patterns.

Here are the disadvantages of static types, as I see it:

They artificially limit your expressiveness. In Java, for instance, the type system doesn't let you do operator overloading, multiple inheritance, mix-ins, reference parameters, or first-class functions. Any time the most natural design involves these things, you're stuck trying to mutate it into a design that fits Java's type system. You can come up with similar complaints about every static type system out there, from Ada to C++ to OCaml to, er, Z-language. About half of all design patterns out there (not just the GoF patterns) appear to be ways to take perfectly natural design ideas and twist them to fit into someone's static type system: recipes for pounding square pegs into round holes.

You spend time up front creating your static models (top-down design), and more time updating them as requirements change. Type annotations also take up real estate in your source code, so they're a little slower to enter and maintain. (This is only a serious problem in Java, which has no type-aliasing facilities.) And as I mentioned above, you also have to spend more time forcing your designs to fit the static type system.

It's much easier to get started with a dynamically-typed programming language. Static type systems are rigidly picky, and you have to spend a bunch of time learning their way of modeling the world, plus additional syntax rules for specifying the static types. Also, static type errors (aka compiler errors) are especially hard for language learners to figure out, because the poor program hasn't had a chance to run yet. You can't even use printf-debugging to see what's going wrong. You're stuck rearranging your code randomly until it gets past the compiler. This means it's harder to learn C++ than C or Smalltalk. It's harder to learn OCaml than Lisp, and harder to learn the Nice language than Java. And Perl has a whole bunch of static complexity - labyrinthine rules about what you're allowed to say, and how, and when - which makes it harder to learn than Ruby or Python. I've never seen an example in which static typing made a language easier to learn.

They can lead to a false sense of security.
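The "detect some type errors early" advantage can be made concrete. This is a minimal sketch, not code from the post: with a raw (untyped) Java list, a wrong-typed element gets in silently and the mistake only surfaces at runtime, when the value is read back; a `List<String>` would have rejected the bad `add` at compile time, before the program ever ran.

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Reads back a raw (untyped) list and reports what happened.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static String readBack() {
        List raw = new ArrayList();   // no static type: mistakes get in freely
        raw.add("hello");
        raw.add(42);                  // wrong element type, but nothing complains yet
        try {
            for (Object o : raw) {
                String s = (String) o;  // the error only surfaces here, at runtime
            }
            return "no error";
        } catch (ClassCastException e) {
            return "runtime ClassCastException";
        }
        // With List<String> instead, raw.add(42) would be a compile-time
        // "incompatible types" error -- caught before the program runs.
    }

    public static void main(String[] args) {
        System.out.println(readBack());  // prints: runtime ClassCastException
    }
}
```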
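The first-class-functions complaint can also be sketched. The `Transform` interface and `applyAll` helper below are hypothetical names invented for illustration: since a plain function can't be passed as a value in (pre-lambda) Java, the natural design gets mutated into a one-method interface plus an anonymous wrapper class, exactly the square-peg-into-round-hole move described above.

```java
import java.util.Arrays;

public class Main {
    // One-method interface standing in for a plain function value.
    interface Transform {
        int apply(int x);
    }

    // Applies f to every element; the "function" arrives boxed in an object.
    static int[] applyAll(int[] xs, Transform f) {
        int[] out = new int[xs.length];
        for (int i = 0; i < xs.length; i++) out[i] = f.apply(xs[i]);
        return out;
    }

    public static void main(String[] args) {
        // Wrapping x -> x * x in an anonymous class just to pass it around.
        Transform square = new Transform() {
            public int apply(int x) { return x * x; }
        };
        System.out.println(Arrays.toString(applyAll(new int[]{1, 2, 3}, square)));
        // prints: [1, 4, 9]
    }
}
```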