Recently I got the chance to do forecasting work for a major American broadcasting corporation. I learned many lessons from this work, so I will try to lay them out in an easily digestible (as opposed to chronological, and therefore confusing) way.
To start off with, I conjecture that there are roughly two ways to forecast: Time Series and Parametric. Regardless of what the textbooks tell you, it is much more practical to first ask yourself:
Can the series (that I am trying to forecast) be expressed as a function of another set of data points?
Think polynomials, think regression. If yes, ask yourself:
Do I have a dependable source for this other set of data points?
If your answer to either question was 'no', then Time Series is your only viable option. Whip out those Excel sheets and your favorite stats primer, and get cracking with the textbook approach.
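If you want to see what that textbook route boils down to, here is a minimal sketch in Python rather than Excel (assuming pandas and statsmodels are installed; the weekly series below is synthetic, standing in for whatever history sits on your sheet):

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical weekly history; in practice this would come straight off your
# Excel sheet, e.g. via pd.read_excel(...).
weeks = pd.date_range("2008-01-06", periods=156, freq="W")
rng = np.random.default_rng(0)
history = pd.Series(
    50 + 0.1 * np.arange(156)                       # trend
    + 10 * np.sin(np.arange(156) * 2 * np.pi / 52)  # yearly seasonality
    + rng.normal(0, 3, 156),                        # randomness
    index=weeks,
)

# Classical additive decomposition into trend, seasonality and residual
# ("randomness"); period=52 assumes a yearly cycle in weekly data.
parts = seasonal_decompose(history, model="additive", period=52)
print(parts.trend.dropna().tail())      # smoothed trend
print(parts.seasonal.head(52))          # one full seasonal cycle
print(parts.resid.dropna().describe())  # what the model cannot explain

This is the same decomposition your stats primer will walk you through by hand: trend, seasonality and randomness are the pieces you are trying to isolate.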
However, if both your answers were yes, life could still be interesting. Don't wait for me to tell you: go and collate the data from wherever it is right now, arrange it prettily on an Excel sheet, apply your favorite font and hold your breath... Now exhale, and download XLMiner. The trial version, of course.
The tools you are looking at right now are Multiple Regression (MR), Artificial Neural Networks (ANN) and Auto-regression. So familiarize yourself with the theory on Wikipedia, make the donation because you appreciate the work Wikipedia does, and run MR, ANN and auto-regression on your data sets, one at a time. Fiddle with the parameters to your heart's content, because:
(1) No matter what you might have understood from the theory, you haven't understood the theory. How so? For example, I assumed that by decreasing the step size for the ANN, I would be able to stabilize the network faster. To my chagrin, larger step sizes actually sped up the stabilization (fewer epochs were needed before the error leveled off). Perhaps the system was chaotic, perhaps the larger step size was simply closer to the optimum. Whatever it was, I clearly had not understood the theory well enough to build the perfect model in one go. So keep trying, keep fiddling.
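To make (1) concrete, here is a rough Python sketch of the same three tools and the same kind of fiddling, in case you want to sanity-check outside XLMiner (assuming pandas, scikit-learn and statsmodels are installed; the column names ratings, ad_spend and season_index are invented stand-ins for your own series and drivers):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from statsmodels.tsa.ar_model import AutoReg

# Hypothetical weekly data: the series to forecast plus two candidate drivers.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "ad_spend": rng.normal(100, 10, n),
    "season_index": np.sin(np.arange(n) * 2 * np.pi / 52),
})
df["ratings"] = 0.5 * df["ad_spend"] + 20 * df["season_index"] + rng.normal(0, 5, n)
X, y = df[["ad_spend", "season_index"]], df["ratings"]

# (a) Multiple regression: the series as a function of the drivers.
mr = LinearRegression().fit(X, y)
print("MR coefficients:", dict(zip(X.columns, mr.coef_.round(2))))

# (b) Artificial neural network on the same inputs (scaled first).
# Fiddling with the step size (learning_rate_init) is exactly the kind of
# experiment described in (1): compare how many epochs each run needs.
Xs = StandardScaler().fit_transform(X)
for step in (0.0005, 0.01):
    ann = MLPRegressor(hidden_layer_sizes=(16,), learning_rate_init=step,
                       max_iter=5000, random_state=0).fit(Xs, y)
    print(f"step size {step}: training stopped after {ann.n_iter_} epochs")

# (c) Auto-regression: the series as a function of its own past values.
ar = AutoReg(y, lags=4).fit()
print("AR(4) one-step-ahead forecast:", round(ar.predict(start=n, end=n).iloc[0], 2))

Nothing here is specific to these libraries; the point is that once the data is tidy, each of the three techniques is a handful of lines, which leaves you plenty of room for fiddling.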
Anyhow, we are getting ahead of ourselves. I promised to share what I learned, and one other thing I learned was the importance of parametric modeling. So if anyone asks you why you built a parametric model, your answer might go something like this:
(2) You can only learn more about your data set. Every new scenario you run has the potential to show you something about your data that you did not know until now, or that you could use in a later hypothesis.
While time series could suffice in many situations (the motto of a good analyst is always to go for the best results, not the coolest stunts), time series depends heavily on historicals, and the degree to which the past will repeat itself is very uncertain. So rather than derive that perfect Trend-Cyclicality-Seasonality-Randomness decomposition, which might fall apart tomorrow because of some demand-side, political or macroeconomic crunch, you can build a parametric model on those very demand-side, political or macroeconomic variables and later run what-if scenarios for the client's viewing pleasure.
All the same, if your Time Series forecast is bang on target in the test runs, you can go ahead and quote it as your primary analysis, and use the parametric models for that new and improved value add.
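To round it off, here is a minimal sketch of the what-if idea, again with invented driver names: fit the parametric model on the drivers you observed, then feed it the driver values you want to assume and compare the forecasts.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical observed history: the series plus its drivers.
rng = np.random.default_rng(1)
n = 200
drivers = pd.DataFrame({
    "ad_spend": rng.normal(100, 10, n),
    "season_index": np.sin(np.arange(n) * 2 * np.pi / 52),
})
ratings = 0.5 * drivers["ad_spend"] + 20 * drivers["season_index"] + rng.normal(0, 5, n)

# Fit the parametric model on what actually happened...
model = LinearRegression().fit(drivers, ratings)

# ...then ask it what would happen under assumed driver values:
# pessimistic, baseline and optimistic ad spend at the same point in the season.
scenarios = pd.DataFrame({
    "ad_spend": [80, 100, 120],
    "season_index": [0.5, 0.5, 0.5],
})
for spend, forecast in zip(scenarios["ad_spend"], model.predict(scenarios)):
    print(f"ad_spend = {spend}: forecast ratings ~ {forecast:.1f}")

The scenarios themselves are whatever your client cares about; the model just translates each assumption into a forecast they can look at side by side.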