What Is Hungarian Notation and Should I Use It?
Hungarian notation is a naming convention that encodes information about a variable’s type or purpose into its name. It was created by Charles Simonyi, a Hungarian-born software engineer, and popularized by Microsoft in the 1980s and 1990s.
The primary benefit of Hungarian notation is that it provides the reader with some context about the variable. For example, if you see a variable named “iCount”, you can infer that it’s an integer type. Similarly, if you see a variable named “strName”, you can assume it’s a string type. This makes the code more readable since the reader doesn’t need to search for the variable definition to determine its type.
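For instance, here is a minimal C++ sketch; the prefixes shown (i, str, b, d) are common examples rather than an official standard, and the variable names beyond iCount and strName are invented for illustration:

```cpp
#include <iostream>
#include <string>

int main() {
    // Hungarian-style names: the prefix encodes the declared type.
    // Exact prefix spellings vary from team to team.
    int         iCount   = 3;          // "i"   -> int
    std::string strName  = "Widget";   // "str" -> std::string
    bool        bEnabled = true;       // "b"   -> bool
    double      dPrice   = 19.99;      // "d"   -> double

    std::cout << iCount << " x " << strName << " at " << dPrice
              << (bEnabled ? " (enabled)\n" : " (disabled)\n");
    return 0;
}
```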
However, one downside of Hungarian notation is that it can make code harder to read. The prefixes add extra characters, making variable names longer and more cumbersome. It can also be challenging to remember all of the different prefixes and their meanings, especially if you’re working in a large codebase.
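The cost is easiest to see side by side. In this hedged comparison, the prefixed forms follow classic Win32-style prefixes (psz for a pointer to a zero-terminated string, ul for unsigned long), and the variable names themselves are invented:

```cpp
#include <cstdio>

int main() {
    // Win32-style Hungarian names: the reader must know what "psz" and "ul" mean.
    const char*   pszFileName = "report.txt";  // psz = pointer to zero-terminated string
    unsigned long ulTimeoutMs = 5000;          // ul  = unsigned long

    // The same variables with plain, descriptive names.
    const char*   fileName  = "report.txt";
    unsigned long timeoutMs = 5000;

    std::printf("%s %lu / %s %lu\n", pszFileName, ulTimeoutMs, fileName, timeoutMs);
    return 0;
}
```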
Another issue with Hungarian notation is that it isn’t always accurate or consistent. If a variable’s type changes but its name doesn’t, the prefix becomes misleading. And if two developers use different prefixes for the same type, the result is confusion and inconsistency across the codebase.
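Here is a hypothetical sketch of the first problem, with iFileSize as an invented example:

```cpp
#include <cstdint>

int main() {
    // Originally:  int iFileSize = ...;
    // The type was later widened to support files over 2 GB, but the name kept
    // its "i" prefix, so the prefix now misstates the actual type.
    std::int64_t iFileSize = 5000000000LL;  // not an int anymore; the prefix is stale
    return (iFileSize > 0) ? 0 : 1;
}
```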
So, should you use Hungarian notation? Ultimately, it depends on personal preference and the coding standards of your organization. Some developers swear by it, while others find it cumbersome and unnecessary. If you do decide to use it, it’s essential to be consistent and follow a well-defined set of conventions.
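For example, a team might write its prefix list down in one place and apply it everywhere. The convention and the Order class below are purely hypothetical:

```cpp
#include <string>

// Hypothetical team convention, documented once and applied consistently:
//   i   -> int            b  -> bool
//   str -> std::string    m_ -> class member (combined with a type prefix)
class Order {
public:
    void AddItem(const std::string& strItemName, int iQuantity) {
        m_iItemCount += iQuantity;
        m_strLastItem = strItemName;
        m_bDirty      = true;
    }

    int ItemCount() const { return m_iItemCount; }

private:
    int         m_iItemCount = 0;
    std::string m_strLastItem;
    bool        m_bDirty     = false;
};

int main() {
    Order order;
    order.AddItem("keyboard", 2);
    return order.ItemCount() == 2 ? 0 : 1;
}
```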