Thank you, dear subscribers; we are overwhelmed by your response.

Your Turn is a unique section from ThePrint featuring points of view from its subscribers. If you are a subscriber and have a point of view, please send it to us. If not, do subscribe here: https://theprint.in/subscribe/

Historically, comparing statistics and analyzing data for any purpose was a manual and time-consuming exercise. Statistics, the original study of data, was also a specialized subject in higher studies. Elaborate graphs were prepared for detailed analysis of painstakingly collected data. Those familiar with the stock markets will know of the candlestick patterns used to track stock prices. This tool for data analysis was first used by Japanese rice traders hundreds of years ago, long before being popularized on the bourses in the USA. Starting in the 1970s, businesses began employing electronic technology to facilitate, accelerate, and automate the analytics process. Technologies such as relational databases, data warehouses, machine learning (ML) algorithms, web search solutions, data visualization, data mining and other user-friendly tools became increasingly available on the office desk.

Historical data is important for providing evidence in specific situations. Unfortunately, input data can be manipulated to achieve desired results. The key to accurate results is proper collection and integrity of the data; else the whole exercise is useless. The type of data, its units of measurement, and the time and period of collection are some of the many variables that affect data gathering. Accessing a growing number of semi-structured or unstructured data sources and ascertaining their quality is not an easy task. As evidence shows, in many cases, especially where human sentiments are involved in the public arena, data collection is prone to errors.

With so many digital resources available and broadcast on the internet, we have a plethora of experts and organisations coming out with analyses and predictions on everything from the weather onwards. The computing facilities that speed up the collection and processing of unverified data have spawned the cutting-edge technology of analytics, which is readily credited with data integrity and accurate analysis. It is moot whether the analogue aspects of human feelings are built into this exercise. The media prominently display the depressing “results” and “predictions” of such analytics on the front page, while relegating the feel-good projections to the inside pages, as if promulgating good tidings were sacrilege. Bad news gets TRPs while good news is dismissed as improbable. I am not implying that data analytics is all worthless; I am only urging some caution.

Multiple studies have shown that new sources of data are often used to estimate human conditions such as literacy, poverty, hunger and unemployment. In such studies, it is often seen that the findings vary depending on the who, why, when and how of the exercise. Also, the high cost of conducting surveys is increasingly leading research teams either to cut back on surveys or to rely on somewhat unreliable administrative data, especially for aspects directly tied to human sentiments, like hunger. This invites controversy, as with the well-meaning publication a few months back, by the reputable Concern Worldwide and Welthungerhilfe, of a peer-reviewed hunger index that relied on selective sample data and treated the UAE and China similarly in its analysis. The reference here is only to illustrate how controversial such well-meaning studies can become, simply because of the complexity of data collection and lopsided preconceived notions.

In many cases, studies based on data should be read alongside personal experience for better results. Two examples from our daily routines illustrate the approach. In the late 1960s there was no meteorology department predicting the weather every minute of the day. I remember my grandmother sniffing the air and announcing a change in the daily routine as she smelt rain and predicted a wet evening. She was seldom wrong. South Indian decoction coffee is also known as filter coffee because of its unique method of preparation. Each step of its making is precise, and if any step strays from the defined specifics, connoisseurs will know it by smelling the final brew. The ideal coffee is made from freshly ground coffee powder. The aroma stays with the beans in storage, while powder on the shelf loses its original character with the passage of time, even within hours. The correct proportion is something learned from experience and handed down through generations. With such variables, it is a must that the process and the mix are perfect for that delectable tumbler of coffee. It is all about the feel, not the use of measures and timers. The same effect cannot be achieved by feeding data into automated coffee dispensers!

Tailpiece: Make no mistake, data analytics is delivering in many sectors and is a potent tool. W. Edwards Deming, who pioneered the widely used sampling techniques in statistics, famously said, "In God we trust, all others must bring data." He may not be far off the mark. Yet the reality is that, in matters involving human sentiments, "smelling the coffee" may give a better sense of where we stand.


These pieces are being published as they have been received – they have not been edited/fact-checked by ThePrint.