Talk:Type inference
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects: Computing, Computer science, and Software.
To-do list for Type inference:
Question
This is a quote from the article:
The ability to infer types automatically makes many programming tasks easier, leaving the programmer free to omit type annotations while maintaining some level of type safety
Why does it say "SOME" and not "A COMPLETE LEVEL OF SAFETY"? Are there downsides that someone would like to discuss? —Preceding unsigned comment added by 194.73.67.254 (talk) 18:16, 4 December 2007 (UTC)
- Type inference protects against errors such as using a Double as a Char, but cannot usually detect the difference between runtime values of the same type. More advanced dependent type systems can embed more kinds of safety in the type system, such as static array index bounds checking, but these systems do not have complete type inference and may require annotations or even interactive theorem proving. Of course getting more out of type systems and type inference is ongoing cutting-edge research, but there are also genuinely undecidable questions. --Ørjan (talk) 07:02, 5 December 2007 (UTC)
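A minimal Haskell illustration of that distinction (the names and specific calls are just for illustration, not from the article): the first definition is rejected statically because a fractional literal is used where a Char is expected, while the second is perfectly well-typed and only fails at run time.
import Data.Char (ord)

bad  = ord 1.5             -- rejected by the type checker: no Fractional instance for Char
boom = head ([] :: [Int])  -- accepted by the type checker, fails only at run time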
- I've just changed this, because I agree with the original unsigned comment. Generally speaking, if you've got a language with type inference, you'll get the same amount of type-safety as the underlying type system gives you. I think a more compelling argument in favor of the original wording would be if you wanted a type that was more specific than what would be inferred: e.g. if you wanted foo to be only int -> int instead of 'a -> 'a, you could view that as a decrease in type safety. EvanED (talk) 03:33, 6 April 2010 (UTC)
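To illustrate that point in Haskell syntax (foo and fooInt are made-up names for the sketch): without a signature the identity function is inferred at its most general type, and an explicit annotation can deliberately narrow it.
foo x = x          -- inferred: foo :: a -> a
fooInt :: Int -> Int
fooInt x = x       -- the annotation restricts the otherwise inferred a -> a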
- I've changed the last sentence of the opening paragraph for reasons suggested earlier in this conversation: because type inference and type safety are effectively unrelated. The degree of type safety of a type system dictates the degree to which the language guarantees that a well-typed program cannot throw a run-time type error; type inference is a language feature that permits the programmer to omit type annotations while still providing type checking. OCaml could require all static expressions to be annotated with their types by the programmer, skipping type inference altogether, but it would still be just as strongly type-safe; type inference really just makes it easier to write OCaml code. I think it's easy to conflate the two because type inference is most commonly found in statically typed languages with strong type safety guarantees; they'd probably be completely unusable without them. But saying "...leaving the programmer free to omit type annotations while maintaining roughly the same amount of type safety as would be provided with explicit type annotations." suggests that type inference has an impact on type safety at all, which it doesn't.
- Clegoues —Preceding undated comment added 03:29, 2 February 2011 (UTC).
- But type annotations can add safety even to a language with mostly inferred types. In GHC Haskell it is often the case that you can get more refined type checking if you are willing to add some annotations for things that are infeasible to infer completely. A simple restriction such as Int -> Int is just the beginning of it; there are also such things as phantom types, polymorphic recursion, higher-rank types... the Haskell people seem dedicated to finding out how to do as much checking statically as possible, while still keeping the resulting system mostly inferred. In my understanding the extended Haskell type system is designed such that you often don't need type annotations when you use functions with advanced types, even if you need some to define them. --Ørjan (talk) 03:25, 3 February 2011 (UTC)
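As a small, hedged sketch of one of those features (phantom types; all names are invented for the example): the unit parameter never appears on the right-hand side, so values in different units cannot be mixed, and the annotation is only needed where the function is defined, not where it is used.
newtype Length unit = Length Double
data Metres
data Feet

addLen :: Length u -> Length u -> Length u   -- needed at the definition site
addLen (Length a) (Length b) = Length (a + b)

-- Call sites are inferred; mixing units is a compile-time error:
-- addLen (Length 1 :: Length Metres) (Length 2 :: Length Feet)   -- rejected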
- I see your point, but stand by my edit. While annotations can increase the precision of inferred types and thus cause the type checker to enforce stricter rules, they don't cause the language in and of itself to be any more or less type-safe. Type safety/checking/type inference are always discussed interchangeably, so it's tricky to settle on firm definitions, but type safety in my mind is a function of the types of errors that can be prevented by the type system/checker for a programming language, not a feature of a particular program written in that language. Clegoues (talk) 04:49, 5 February 2011 (UTC)
- There's some confusion here, but IMHO the edit stands too. The original statement argued that requiring all type annotations (by giving up type inference) can increase type safety, but that is confusing, especially in the introduction. That's because existing languages without any type inference do not support advanced typing features. The article could and should later discuss the tradeoff between complete type inference (as in ML) and partial type inference (as in Haskell with extensions, or Scala).
- A type system is safe wrt. a class of runtime errors if typechecking statically prevents those errors: e.g. a typechecker rejects 1 + "hello", but accepts head []. In this technical sense, a type annotation cannot increase type safety: if you give type int -> int to the function foo x = x, it's not for type safety reasons - you can't introduce any error prevented by type safety, and any other error was not statically prevented before. It could well be for abstraction reasons: you might want to change the function to e.g. foo x = x + 1. But that's not about safety. It's about choosing what your interface is, and hiding the fact that your current implementation of foo also works on strings.
- Many Haskell extensions instead allow typing more programs: for instance, a program using polymorphic recursion might contain no runtime errors ever, but it will still be refused by an ML compiler. Haskell allows you to write this program and type it, but has to give up complete type inference (an annotation is required). The alternative is to use a language without static type-checking (Lisp, Scheme, Python, Ruby...). If you want to use subtyping and type inference, some problems also arise, depending on the kind of subtyping supported.
- However, a programmer can prevent additional runtime/logical errors by relying on the type system. For that, a different data representation might be needed, and it might require advanced type features. For instance, you might want a datatype which can only represent perfect trees, where all leaves have the same depth, because in your program other trees would be logically invalid. To do so you need so-called nested types, where you allow polymorphic recursion within a type:
- data PTree a = PLeaf a | PNode (PTree (a,a))
- But that's again about specifying your interfaces, not about type safety in the technical sense. While type safety is often used in a broader sense, that sense is not defined. Similarly, Pierce explained that there is no clear definition of e.g. strongly- or weakly-typed languages. --Blaisorblade (talk) 12:16, 6 May 2011 (UTC)
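To make the polymorphic-recursion point above concrete, here is a hypothetical function over the PTree type from the comment above; the recursive call is at type PTree (a, a), so plain Hindley-Milner inference alone cannot type it and GHC accepts it only with the explicit signature.
-- Counts the leaves of a perfect tree (uses the PTree declaration above).
size :: PTree a -> Int      -- mandatory: removing the signature makes size untypable
size (PLeaf _) = 1
size (PNode t) = 2 * size t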
Scope
Type inference is historically linked to functional languages but I don't think it is limited to them.
- Quite right. It's actually just part of lambda calculus, which isn't even mentioned here. That it happens to find good use in programming languages is nice, but not the most important thing. However, properly rewriting this article takes more time than I have available—sorry, Wikipedia. --131.155.69.144 12:21, 8 Mar 2005 (UTC)
- You make me sad. So be it.
- But what I want to know is whether the adumbrated algorithm is also used in dynamic typing systems. --Maru (talk) Contribs 01:06, 14 November 2005 (UTC)
- So-called 'dynamic typing systems' are not actually type systems per se (in the classical theoretical sense), but are rather a method for 'tagging' (sometimes primitive) memory-resident data structures in such a way as to influence their runtime evaluation (e.g., execute nominally or throw an error if something 'bad' is done with the structure). Type inference and the unification algorithm are not used here. There are some systems which propose to combine elements from static type systems with these dynamic tagging approaches to achieve some sort of hybrid system, though, and in those cases you may see some elements of static type inference arise. 70.36.29.121 08:23, 30 January 2006 (UTC)
- And the statement that "Duck typing [is] an analogous concept in languages with dynamic or weak typing" grates on me like nails on a blackboard. It's just wrong on so many levels, including not being typing. However, I do appreciate the sentiment, and will try to write up something not so completely bogus that preserves it. cjs@cynic.net. —Preceding unsigned comment added by 217.36.35.145 (talk) 10:09, 6 September 2009 (UTC) (Later:) No, I just took it out. One might be able to make the claim that something like Haskell's type classes combined with type inference can look, textually, like Ruby code using duck typing, but that's about as far as I would ever go; on any real semantic level, the two are very different things.
- I've added a little more to the article and corrected the statement regarding type inference only being used in functional languages. However, there's a whole lot that needs to be fleshed out to make things clearer than they are now. In particular, it would be good to explain more thoroughly the role type inference plays in programming languages by describing the compilation process: creating an AST, annotating it through type inference, possibly performing type erasure, then converting to an IR for optimization, translation to the target platform, etc. For now, I just mentioned all of that as "compiler," and although that's not technically incorrect, and it's easier to understand without all the additional information, it's not strictly the most accurate account. 70.36.29.121 12:16, 17 December 2005 (UTC)
External link
There's a thread in the archive of the Haskell mailing list with contributions by Philip Wadler and others explaining the history of the Hindley-Milner algorithm, but I'm not sure if this is important enough to be incorporated into the article as long as the article doesn't explain the algorithm itself. The thread also contains detailed references to the original publications of Curry, Feys, Hindley, Milner and Damas. --Tobias Bergemann 08:12, 2004 Nov 16 (UTC)
what has operator overloading to do with this?
Since there are no ad-hoc polymorphic subfunctions in the function definition, we can declare the function to be parametric polymorphic.
I followed the links trying to understand this sentence but it didn't work. Could somebody explain it? --euyyn 22:52, 2 July 2006 (UTC)
- It is simply wrong, as is the current example even with the comment removed, because the length definition given is ad-hoc overloaded: in Haskell, 0, 1 and + are all affiliated not with Int, but with the overloaded type class Num (which includes Ints, unbounded Integers, floats, and even complex numbers). The standard length definition in Haskell is given an explicit type annotation to avoid this overloading. There is a genericLength function, but even that is explicitly restricted to a subclass, Integral, thus disallowing floating-point numbers.
- If we wish to avoid the complication of ad-hoc overloading in the example, but still retain correct Haskell, I think we need to avoid using numerical expressions at all. I will try to use the map function instead. An alternative might be to use a language without numerical overloading, such as OCaml. --Ørjan 21:03, 23 November 2006 (UTC)
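For instance (a hedged sketch; len and pairUp are invented names): a naive length written without a signature is inferred with a Num-constrained result, dragging type classes into the example, while a map-based function stays class-free.
len [] = 0
len (_:xs) = 1 + len xs            -- inferred: len :: Num n => [a] -> n

pairUp xs = map (\x -> (x, x)) xs  -- inferred: pairUp :: [a] -> [(a, a)]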
- Well, we have no Haskell lessons here in ULPGC.... I haven't understood a word (nearly). --euyyn 17:24, 18 December 2006 (UTC)
- That was sort of the point: The Haskell type system is quite advanced/complicated, so to get an example of type inference in it that is simple enough for the article, we need to be a bit careful and avoid the type classes. --Ørjan 01:26, 20 December 2006 (UTC)
80.7.195.184 Yes, but computer Reals are not real Reals; real Reals would have length INFINITY. The computer Reals are rational, and what does that make the complex but imagined?
Could we call this Run Time Binding or Real Time Binding?
Also, is it because atomics are not objects that we can do type inference?
i.e. an Object must be made of something: another object, or something which is part of an object (e.g. an atomic).
Something must be mass or else it's energy? So 3 is energy; t=3 means t is an integer, but could become a simulated real. Either way we don't mean t as in the letter between s & u; for that we would say t="t" or t='t'.
So it's syntax and how to chew it.
too technical
Is it just me, or does this article dive right into the deep end with technical jargon and mark-up? Perhaps at least some pointers to articles which might explain some of the notation as well as terminology would be in order...?
(P.S., isn't there a template to mark an article as too technical?) —The preceding unsigned comment was added by 71.70.210.188 (talk) 04:49, 11 December 2006 (UTC).
- 27-Oct-2007: The article was expanded with a non-technical explanation (on 22-Feb-2007), showing a simple coding example. In October 2007, I expanded the example to also show what won't work for type inference. I think that is enough simplification, because the detailed technical issues are numerous and deserve more space in the article to encourage a full treatment of the subject. We must balance a novice view against the right of technical readers to present and read full details. I have removed the tag "{{technical}}" and suggested writing a more detailed analysis if the article still seems troublesome. -Wikid77 18:49, 27 October 2007 (UTC)
bad description
This article states that "Type inference is a feature present in some strongly statically typed programming languages." However, Python implements type inference but is dynamically typed. The opening statement should be clarified so that type inference is not presented as only a feature of "strongly statically typed programming languages." mbecker 13:57, 23 August 2007 (UTC)
Python definitely does not implement any sort of type inferencing. Python is interpreted, not compiled. Type inferencing is something you do at compile time to assert all objects are correctly typed. Dynamically typed languages like Python only do runtime type checking, and thus, type errors can only be caught when executing the code. Tac-Tics 07:13, 9 September 2007 (UTC)
- Nevertheless note that PyPy uses type inference for RPython source code which is a subset of the Python language. 77.8.158.253 (talk) 17:34, 20 April 2008 (UTC)
- That would make RPython a statically typed language, would it not? IMHO, doing type inference (and therefore compile-time type annotation) is, by definition a feature only of statically typed programming languages (by the definition of "statically typed" that types are annotated at compile-time). I'd just drop the word "strongly", as it is irrelevant to the discussion and there is disagreement over what "strong" means. —EatMyShortz (talk) 16:35, 19 August 2008 (UTC)
- RPython is exactly a statically-typed language, and is very restricted compared to Python: the restrictions are chosen to allow type inference, which would be impossible for full Python. However, the usage of an interpreter is totally irrelevant here. Python is compiled down to bytecode before execution, and it could be type-checked at that point; the bytecode is then usually interpreted (or JITted with PyPy). OTOH Haskell, OCaml, Java and Scala feature type checking and they also have interpreters, which perform type checking, compile code down to bytecode, and then execute it. Haskell and OCaml also have fully-static compilers, but that's irrelevant here.
- What is true is that pure interpreters (say for BASIC, or for the shell) read the source code line-by-line instead of parsing it, so a program could stop during execution for a syntax error. Maybe that's even true for Ruby/Perl, where the main implementation has no parser; but Python is definitely analyzed before execution. --Blaisorblade (talk) 12:34, 6 May 2011 (UTC)
- I think what you're referring to is duck typing. — FatalError 07:22, 14 March 2009 (UTC)
c++ generic functions
Is it possible to say that C++ implements static type inferencing for functions? i.e. given a templated method definition template<class T> void func(T &v), then depending on the use context (the caller's argument type) an appropriate version of func will be instantiated; what is it that distinguishes this sort of behavior from ML or OCaml type inferencing? Note I am not arguing for C++ inclusion as a representative example in the article, but am interested to know the distinction. —Preceding unsigned comment added by 213.56.155.247 (talk) 20:05, 5 July 2008 (UTC)
- No, that isn't type inference, just polymorphism/specialization. Type inference is where you don't specify the type, yet there is a "correct" type for that variable, and type inference works it out for you (e.g. it must be a string here — no other type will fit, so type inference might infer that). Whereas templates in C++ are where you don't specify the type, indicating that all types are allowed there (there is no "correct" type). When the caller passes a specifically typed argument and the appropriate version of the function is instantiated, that is called specialization. So the two are actually very different concepts. Type inference is just about figuring out types that the programmer was too lazy to write themselves :) —EatMyShortz (talk) 16:39, 19 August 2008 (UTC)
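A tiny Haskell illustration of that difference (shout is a made-up name): no type is written anywhere, yet only one type fits, and the compiler infers it once rather than instantiating the function per caller as a template would.
shout s = s ++ "!"   -- "!" is a String and (++) joins lists of the same element type,
                     -- so the inferred type is shout :: String -> String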
- I know that this was posted a really long time ago, but actually C++ templates have rather more to do with type inference than your comment here states. The whole argument-dependent lookup algorithm (or whatever it is called) really is a form of type inference. In fact, templates can be and are used for exactly what you say -- "figuring out types that the programmer was too lazy to write themselves." For instance, the STL contains a make_pair function which takes two arguments of any type and returns a pair of those two values. However, this function is entirely redundant except in one respect, because the pair constructor itself performs the same task. The only reason it's provided is so that you don't have to explicitly specify the resulting type. EvanED (talk) 03:15, 6 April 2010 (UTC)
- That isn't type inference, which determines the types of expressions from the operations used to compose them. There is currently no type inference in C++. -- 98.108.202.144 (talk) 07:15, 14 September 2010 (UTC)
- I disagree. Type arguments are in fact inferred in C++ and Java, and at least for Java the specification talks about type inference - the definition of type inference you give in fact describes how type-checking determines 1+2 to have type Int. Yet the allowed inference is severely limited compared to what C++0x allows. System F-omega is a canonical example of a lambda calculus where type inference is impossible and type arguments are allowed - and there not even the type arguments are inferred. --Blaisorblade (talk) 12:49, 6 May 2011 (UTC)
Milner independent of Hindley?
In 1978 Robin Milner, independently of Hindley's work, provided an equivalent algorithm
I think this tells the truth, but not the whole truth.
From Milner "A Theory of Type Polymorphism in Programming" (1978):
After doing this work we became aware of Hindley's method for deriving the "principal type scheme" (which is what we call a polymorphic type) for a term in combinatory logic. Hindley appears to have been the first to notice that the Unification Algorithm of Robinson is appropriate to this problem. Our work can be regarded as an extension of Hindley’s method to programming languages with local declarations, and as a semantic justification of the method.
Looks like it was independent, but immediately acknowledged, and Milner prefers to see his work as a "semantic justification" of Hindley's method. —EatMyShortz (talk) 16:44, 19 August 2008 (UTC)
Implicit Typing
I removed the mention of implicit typing, since AFAIK that is a more general term, i.e., type inference is a kind of implicit typing, but we would say "dynamic" languages like Python use implicit typing but not type inference. Does this sound right? If not, we should probably point the redirect at implicit typing somewhere else. And if so, latent typing has problems. ErikHaugen (talk) 22:38, 23 August 2010 (UTC)
- You were definitely right to remove the mention of implicit typing in that context, because "implicit typing" is a property of programming languages, but "type inference" is the act of reconstructing type information – thus, though connected, those don't mean the same thing.
- Regarding the definition of "implicit typing": I think there's no agreement in the literature; for example, Luca Cardelli uses the term "typing" only for statically typed languages, whereas he calls dynamic languages like Python "dynamically checked" (link). – Adrian Willenbücher (talk) 06:39, 27 August 2010 (UTC)
List of languages with type inference
[edit]The opening paragraph contains a sentence starting with "Some languages that include type inference are [...]", followed by a list of languages with type inference. Please note the word "some" – this is not supposed to be an exhaustive list, and I think only the most important languages ("important" in relation to type inference) ought to actually be listed there. Someone keeps adding Vala to the list; I would like to know why Vala is important enough to be on this list? – Adrian Willenbücher (talk) 10:19, 31 August 2010 (UTC)
Article Quality (Algorithm W)
The body of the section on Algorithm W (Hindley–Milner type inference algorithm), which purports to outline it, presents only an unrelated type checking (not type inference!) algorithm. This would be acceptable if the section then continued to present the real one. But since the section is unfinished, it harms more than it helps. Algorithm W uses unification in a very different manner. Right now, the section misleads completely.
If no expert is available to summarize Algorithm W, the quality of the article could be improved by removing everything from this section but the introduction. The section itself should be kept, since Hindley-Milner is the type inference method for languages dealing with functions and expressions.
In a way, removing the body of the section would help much to re-balance the article. IMO, it would be better to devote a separate article to Algorithm W and only mention it here.
- I'm probably sufficiently "expert" to take a crack at a reasonable Algorithm W explanation... I'm therefore working on a fairly comprehensive revision that I expect to finish within a couple of days. It's true that it's a bit long with a proper explanation of Algorithm W, but as the algorithm really is the type inference algorithm it's perhaps a bit redundant to separate the two. Clegoues (talk) 04:42, 5 February 2011 (UTC)
As of today, the problem has not been dealt with. Clegoues, who promised to help above, has not yet made any modification besides adding a reference to a Greek Wikipedia page. To reiterate, the section titled "Hindley–Milner type inference algorithm" has nothing to do with Hindley-Milner; instead something unrelated, perhaps an introduction to System F (which does type-checking, and for which type inference is undecidable), is summarized. The author of the section got it all wrong, so it is misleading at best. Please see
- Principal type-schemes for functional programs.
- A proof of correctness for the Hindley-Milner type inference algorithm
- A Theory of Type Polymorphism in Programming
for more technical details. In particular, HM is so outstanding because it does not need any type annotation at all, unlike the annotated abstraction λx:τ.e wrongly added in the claimed summary, where the annotation τ defines the type of the parameter x. Additionally, a key point of HM is LET-polymorphism, which is also not present in the purported summary. Variations and extensions aside, the expression syntax in HM is e ::= x | e₁ e₂ | λx.e | let x = e₁ in e₂, with no annotation on the abstraction.
A proper summary could be shaped along the following lines
- syntax of types, schemes, expression, environment
- type subsumption
- free variables in types, schemes, environments
- declarative rule system
- syntax driven rule system
- algorithm W
- extensions
The latter would have to deal with recursive expressions and type functions.
I agree it is neither simple to understand nor to summarize HM, but presenting something very different is not helpful in any way. Since the matter has not been dealt with since 2009, I suggest removing the section in question. Instead a reference to a new HM and/or Algorithm W page could be added, leaving space to describe the particular method. If one wants to summarize HM briefly, only the syntax could be described, stating that HM finds the most general type and that it extends easily to recursion and type functions. -- 87.174.224.10 (talk) 19:52, 31 July 2011 (UTC)
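For reference, here is a minimal, self-contained sketch in Haskell of what the outline above describes (core expression syntax, Robinson unification, instantiation at variables, generalisation at let). It is illustrative only: all names, the single Int base type, and the crude error handling are assumptions made for brevity, not anything taken from the article or from the cited papers.
import qualified Data.Map as M
import Data.Maybe (fromMaybe)

-- Types, type schemes and the core HM expression language.
data Type   = TVar Int | TInt | TFun Type Type deriving (Eq, Show)
data Scheme = Forall [Int] Type
data Expr   = Var String | Lit Int | App Expr Expr | Lam String Expr
            | Let String Expr Expr

type Subst = M.Map Int Type      -- maps type variables to types
type Env   = M.Map String Scheme -- typing environment

apply :: Subst -> Type -> Type
apply s (TVar n)   = fromMaybe (TVar n) (M.lookup n s)
apply s (TFun a b) = TFun (apply s a) (apply s b)
apply _ TInt       = TInt

-- compose s2 s1 applies s1 first, then s2.
compose :: Subst -> Subst -> Subst
compose s2 s1 = M.map (apply s2) s1 `M.union` s2

ftv :: Type -> [Int]
ftv (TVar n)   = [n]
ftv (TFun a b) = ftv a ++ ftv b
ftv TInt       = []

-- Robinson-style unification, returning a most general unifier.
unify :: Type -> Type -> Subst
unify (TFun a b) (TFun a' b') =
  let s1 = unify a a'
      s2 = unify (apply s1 b) (apply s1 b')
  in compose s2 s1
unify (TVar n) t = bindVar n t
unify t (TVar n) = bindVar n t
unify TInt TInt  = M.empty
unify t1 t2      = error ("cannot unify " ++ show t1 ++ " with " ++ show t2)

bindVar :: Int -> Type -> Subst
bindVar n t
  | t == TVar n    = M.empty
  | n `elem` ftv t = error "occurs check failed"
  | otherwise      = M.singleton n t

-- Algorithm W; the Int argument/result threads a fresh-variable counter.
infer :: Env -> Expr -> Int -> (Subst, Type, Int)
infer _   (Lit _) n = (M.empty, TInt, n)
infer env (Var x) n = case M.lookup x env of
  Nothing -> error ("unbound variable " ++ x)
  Just (Forall vs t) ->              -- instantiate bound variables freshly
    (M.empty, apply (M.fromList (zip vs (map TVar [n ..]))) t, n + length vs)
infer env (Lam x e) n =
  let a            = TVar n
      (s1, t1, n1) = infer (M.insert x (Forall [] a) env) e (n + 1)
  in (s1, TFun (apply s1 a) t1, n1)
infer env (App e1 e2) n =
  let (s1, t1, n1) = infer env e1 n
      (s2, t2, n2) = infer (M.map (applyScheme s1) env) e2 n1
      r            = TVar n2
      s3           = unify (apply s2 t1) (TFun t2 r)
  in (s3 `compose` s2 `compose` s1, apply s3 r, n2 + 1)
infer env (Let x e1 e2) n =
  let (s1, t1, n1) = infer env e1 n
      env1         = M.map (applyScheme s1) env
      scheme       = Forall (filter (`notElem` envFtv env1) (ftv t1)) t1 -- generalise
      (s2, t2, n2) = infer (M.insert x scheme env1) e2 n1
  in (s2 `compose` s1, t2, n2)

applyScheme :: Subst -> Scheme -> Scheme
applyScheme s (Forall vs t) = Forall vs (apply (foldr M.delete s vs) t)

envFtv :: Env -> [Int]
envFtv env = concat [ filter (`notElem` vs) (ftv t) | Forall vs t <- M.elems env ]

-- Example: infer M.empty (Let "id" (Lam "x" (Var "x")) (App (Var "id") (Lit 1))) 0
-- yields TInt, with "id" generalised to forall a. a -> a by the let rule.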
ANNOUNCE: New Article on "Hindley-Milner"
The new article on Hindley-Milner is now on-line. This should close the above-mentioned issue.
Please let me know about related concerns or suggestions.
-- Cobalt pen (talk) 17:04, 10 September 2011 (UTC)
Definition and examples are wrong
I believe this article is almost entirely wrong.
Type inference is a system whereby the types of variables, expressions and functions are computed from their use. A restricted form of this, where types of expressions are calculated from their subterms, is called type deduction.
Technically, deduction is a simple recursive-descent, S-attributed or bottom-up calculation which ALL compilers with static type systems can do. For example, in C++98
int x = 1;
the compiler can deduce the type x should have from the initialising expression, and check it is compatible with the declared type. In C++11 you can write:
auto x = 1;
Now you don't need to state the type of the variable, but the type calculation is the same. There is no type inference in either case. Here is type inference in OCaml:
let f x = x + 1
The compiler infers x is of type int, because the function + has been applied to it. This is inference, not deduction, because the type is calculated from the way the variable is used later. And here:
let f x = g x in f 1
the argument to f is taken to be int because f is applied to an int later, and thus the argument of g must also be int. Again, this is type inference.
As a result of the wrong definition in this article a lot of languages are claiming to have inferencing type systems, including Rust and Go, which cannot do type inference. As a general guide any language with overloading is unlikely to also have any kind of inference, since the two features are very hard to mix. [Actually C# has overloading and a limited kind of inference, and there are research papers which demonstrate systems with both features and discuss the restrictions which arise as a result]
I think I must add the interesting note that in fact C++ can do type inference, though not quite the kind you'd expect. C++ can infer the types of function template type parameters from function arguments. This calculation requires unification, and is not S-attributed, so technically it qualifies as type inference (despite the fact that the Standard actually describes this as deduction :)
Skaller (talk) 15:25, 29 May 2013 (UTC)
Contradiction in text
In Type inference:
The opposite operation of type inference is called type erasure.
In Type erasure:
The opposite of type erasure is called reification.
And yet there is no mention of reification in Type inference, or mentioning of type inferencing in Reification (computer science). --Voidvector (talk) 05:05, 24 November 2014 (UTC)
Visual Basic type inference
Visual Basic 6.0 supported defining variables with the type "Any". However, I'm not sure if this means that Visual Basic 6.0 supports type inference. Does anyone have any ideas? Lcaa9 (talk) 10:43, 31 January 2016 (UTC)