Snowfall-inspired video

Continuing my series of posts about making music inspired by videos/images I recorded of weather/climate events. This episode occurred when an unusual amount of snow fell during a short period this past winter. The term unusual is used because this event was unusual only by current standards. The same amount of snow used to fall in the past, and it was considered a normal event.

I called this video Snow – Spock’s finger as a joke referencing the album of the same name (Snow) by the progressive rock band Spock’s Beard.

I Believe In Gnomes, Santa Claus And The Weather Man

It’s been a while since my last post. Basically, the struggle of any artist: be happy or make money!

Anyway, I have been experimenting and making music inspired by videos/images I recorded of weather/climate events. It is more or less a little side project that will help me when creating the videos for the climate change prog rock opera.

I made this video when a series of storms hit the place where I live.

The reason I called this song I Believe In Gnomes, Santa Claus And The Weather Man is that sometimes I have the feeling people believe more in gnomes than in the weatherman’s forecast. Weathermen are not that bad, and they do really good work most of the time. Weather forecasting is hard!

Think Python AND R and not just PYTHON OR R: basic operators can generate different results

Nowadays, probably the two most used programming languages for machine learning are Python and R. Both have advantages and disadvantages. With tools like rpy2 or Jupyter with the IRkernel, it is possible to integrate R and Python into a single application and make them “talk” to each other. However, it is important to know how they work individually before connecting the two languages. I will try to show some of the similarities and differences between their commands, functions and environments. For example, both languages can have very similar commands that nevertheless lead to different results.

There are hundreds of books about Python and R in different flavours: basic, advanced, applied, how-to, free, paid, master, ninja, etc. Because I use a lot of programming applied to real-world scenarios, I decided, in a randomly biased way, to use as an initial guide a free book called A Hands-On Introduction to Using Python in the Atmospheric and Oceanic Sciences by Prof. Johnny Lin (the main idea is to reproduce portable machine learning code, so I will change the reference later). Thus, I will follow some examples from this book and give insights about the Python/R relation.

Nevertheless, it is imperative to know which version of the programming language one is using. Commands, types, and syntax can change between versions. Here I am using Python 2.7.13 and R 3.2.2, both 64-bit, on Ubuntu 16.04.

Basic operators

In R, the elementary arithmetic operators are the usual +, -, *, / and ^ for raising to a power. The only difference in Python is the exponentiation operator, which is **.
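
For example (a quick check in R; the same operation in Python would be written 2**3):

2^3
## [1] 8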

Basic variables

Python and R are dynamically typed, meaning that variables take on the type of whatever value they are assigned. Additionally, a variable’s type can be changed (at run time) without changing the variable name. Let’s start with two of the most important basic types: integer and float (called double in R). Here it is possible to see the first few differences between the languages. The integer type, from the R documentation:

Integer vectors exist so that data can be passed to C or Fortran code which expects them, and so that (small) integer data can be represented exactly and compactly. Note that current implementations of R use 32-bit integers for integer vectors, so the range of representable integers is restricted to about +/-2*10^9: doubles can hold much larger integers exactly.
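
Before moving on, a quick illustrative sketch in R showing both points: the type follows the assigned value (dynamic typing), and the documented 32-bit integer limit can be inspected directly:

x = 3L # integer literal
is.integer(x)
## [1] TRUE
x = "hello" # same name, different type at run time
is.character(x)
## [1] TRUE
.Machine$integer.max # largest representable integer (about 2*10^9)
## [1] 2147483647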

There are two integer types in Python: plain and long.

Plain integers (also just called integers) are implemented using long in C, which gives them at least 32 bits of precision (sys.maxint is always set to the maximum plain integer value for the current platform, the minimum value is -sys.maxint - 1). Long integers have unlimited precision.

How about the float (or double) types? For both programming languages, how double (or float) precision numbers are ultimately handled is down to the CPU/FPU and compiler (i.e. the machine on which your program is running).

OK, let’s try a simple example. This example is the same as Example 4 in chapter 3 of our guide book. Let’s say we have the following variables:

a = 3.5
b = -2.1
c = 3
d = 4

If we run the operators described above in Python we have:

print(a*b) #case 1
print(a*d) #case 2
print(b+c) #case 3
print(a/c) #case 4
print(c/d) #case 5
## -7.35
## 14.0
## 0.9
## 1.16666666667
## 0

Repeating the same steps in R we obtain:

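print(a*b) #case 1
print(a*d) #case 2
print(b+c) #case 3
print(a/c) #case 4
print(c/d) #case 5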
## [1] -7.35
## [1] 14
## [1] 0.9
## [1] 1.166667
## [1] 0.75

On cases 2, 4 and 5 we had different results. On case 4, the difference should be related to the float/double representation, so a difference in displayed precision is expected. However, it does not mean that R has less precision than Python in this example; it could simply be the way R shows the value to the user. Yes, unfortunately R can mislead you. For example, on case 2 the numbers are technically the same but they are shown in a different way. The fact that R shows 14 instead of 14.0 does not mean that the value is an integer rather than a double. Let’s use the functions is.integer() and is.double() to check the type of the result of case 2.

is.integer(a*d)
is.double(a*d)
## [1] FALSE
## [1] TRUE
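
To confirm that this is only a matter of display, we can ask R to print the value with one decimal place (using sprintf() here as one option):

print(a*d)
## [1] 14
sprintf("%.1f", a*d)
## [1] "14.0"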

Remember the “dynamically typed” part? The programming language automatically decides the type of a variable based on the value of the operation. Again, from the Python documentation:

Python fully supports mixed arithmetic: when a binary arithmetic operator has operands of different numeric types, the operand with the “narrower” type is widened to that of the other, where plain integer is narrower than long integer is narrower than floating point is narrower than complex. Comparisons between numbers of mixed type use the same rule.

That explains why in case 2 we have a float. You can check using the function isinstance().

print(isinstance( a*d, ( int, long ) ))
print(isinstance( a*d, float ))
## False
## True

How about case 5? Case 5 is a little bit more interesting. In Python we are dealing with two integers, so the result is also an integer. That is why we get 0: Python 2 performs integer division and returns only the quotient. (In Python 3, / performs true division and // is the integer-division operator, so the same code would return 0.75.)

print(isinstance( c/d, ( int, long ) ))
print(isinstance( c/d, ( float ) ))
## True
## False

How about R? Here is the reason (from the documentation):

For most purposes the user will not be concerned if the “numbers” in a numeric vector are integers, reals or even complex. Internally calculations are done as double precision real numbers, or double precision complex numbers if the input data are complex.

Thus, it is important to keep this behaviour in mind, because the same expression can give different results in the two languages.
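
A quick check in R illustrates this; note that R also has a dedicated integer-division operator, %/%, which reproduces the Python 2 behaviour of case 5:

c/d
## [1] 0.75
is.double(c/d)
## [1] TRUE
c %/% d # integer division in R
## [1] 0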

Similarities not so similar

As a final remark, I’d like to mention the not-so-similar similarity of the ^ operator in R and Python. Yes, Python also has a ^ operator, but it is the bitwise XOR operator, which is different from exponentiation. See the Python documentation for more information about bitwise operators.
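
As a minimal sketch of the difference, R exposes bitwise XOR through the function bitwXor(); in Python, 3 ^ 4 evaluates to 7 (XOR) while 3 ** 4 evaluates to 81 (exponentiation):

3^4 # exponentiation in R
## [1] 81
bitwXor(3L, 4L) # bitwise XOR, i.e. what ^ means in Python
## [1] 7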

If you have any question, suggestion or opinion about this post please feel free to write a comment below.

Be careful what you wish for: Error measures for regression using R

Almost 10 years ago I was working with evolutionary strategies for tuning neural networks for time series prediction when I became curious about error measures and their effects on the final forecast. In general, evolutionary algorithms use a fitness function that is based on an error measure. The objective is to obtain better individuals by minimizing (or maximizing) the fitness function. Thus, to determine which model is “the best”, the performance of a trained model is evaluated against one or more criteria (e.g. an error measure). However, the relation between the lowest error and the “best model” is complex and should be considered according to the desired goal (i.e. forecasting averages, forecasting extremes, deviance measures, relative errors, etc.).

There is a journal paper that describes the error measures used in the ‘qualV’ package. This package has several implementations of quantitative validation methods. The paper (which is very interesting, by the way) also has some examples of how the final values of the error measures change when dealing with noise, shifts, nonlinear scaling, etc.

The objective of this post is just to show the problem and raise awareness when choosing the best model based on error alone. Sometimes the minimization of one error measure does not guarantee the minimization of all other error measures, and it could even lead to a Pareto front. Here I am using some of the functions described in the paper and, for simplicity, I am comparing only four error measures: mean absolute error (MAE), root-mean-square error (RMSE), correlation coefficient (r) and mean absolute percentage error (MAPE). Each error measure captures a distinct characteristic of the time series, and each of them has strong and weak points. I am using R version 3.3.2 (2016-10-31) on Ubuntu 16.04.
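
For reference, the usual textbook definitions of these measures are given below, with y_i the observed values and \hat{y}_i the forecasts (the qualV implementations may differ in small details, e.g. the scaling of MAPE, so treat these as the standard forms):

MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|
RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}
r = \frac{\sum_{i=1}^{n}(y_i-\bar{y})(\hat{y}_i-\bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2\,\sum_{i=1}^{n}(\hat{y}_i-\bar{\hat{y}})^2}}
MAPE = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i-\hat{y}_i}{y_i}\right|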

Case (i): Adding noise

Let’s say we have an original signal given by:
y=a\sin(\pi x b)+s.

Using x in [0,1], a=1.5, b=2, and s=0.75 we get:

x = seq(0,1,by=.005)
ysignal = 1.5*sin(pi*2*x)+0.75
plot(x,ysignal,main = "Main signal")

[Figure: “Main signal” — plot of ysignal against x]

We now change the original signal and check how this affects the final result. If the “forecast” were identical to the signal, all the errors would be 0. Thus, applying some noise to the signal, s=(0.75+noise), where the noise comes from a Gaussian distribution with mean=0 and standard deviation=0.2, and comparing with the original signal, we get:

library(qualV) # provides MAE, RMSE and MAPE
n = 0.2 # noise level
noise = rnorm(length(x),sd=n) # Gaussian noise, mean 0, sd 0.2
# no set.seed() is used, so the error values below will vary slightly between runs
ynoise = ysignal+noise
par(mfrow=c(1,2))
range.yy <- range(c(ysignal,ynoise))
plot(x,ysignal,type='l',main = "Adding noise"); lines(x,ynoise,col=2)
plot(ynoise,ysignal,ylim=range.yy,xlim=range.yy,main = "Signal vs Forecast")

[Figure: “Adding noise” (signal in black, noisy series in red) and “Signal vs Forecast” scatter plot]

round(MAE(ysignal,ynoise),2)
## [1] 0.16
round(RMSE(ysignal,ynoise),2)
## [1] 0.21
round(cor(ysignal,ynoise),2)
## [1] 0.98
round(MAPE(ysignal,ynoise),2)
## [1] 40.65

Case (ii): Shifting the signal

Let’s apply a shift to the values of the original signal. With s=0.95 (i.e. adding 0.2 to the original signal) we have:

yshift = ysignal+0.2

[Figure: original signal (black) vs shifted series (red)]

round(MAE(ysignal,yshift),2)
## [1] 0.2
round(RMSE(ysignal,yshift),2)
## [1] 0.2
round(cor(ysignal,yshift),2)
## [1] 1
round(MAPE(ysignal,yshift),2)
## [1] 60.95

Case (iii): Shift + rescale

Let’s rescale the values of the original signal and also apply a shift. Multiplying the original signal by 0.8 and adding 0.2 we have:

yresshift = 0.8*ysignal+0.2

[Figure: original signal (black) vs rescaled and shifted series (red)]

round(MAE(ysignal,yresshift),2)
## [1] 0.19
round(RMSE(ysignal,yresshift),2)
## [1] 0.22
round(cor(ysignal,yresshift),2)
## [1] 1
round(MAPE(ysignal,yresshift),2)
## [1] 61.66

Case (iv): Changing the frequency

In this case let’s slightly vary the frequency of the original signal, making b=2.11:

yfreq = 1.5*sin(pi*2.11*x)+0.75

[Figure: original signal (black) vs frequency-changed series (red)]

round(MAE(ysignal,yfreq),2)
## [1] 0.17
round(RMSE(ysignal,yfreq),2)
## [1] 0.22
round(cor(ysignal,yfreq),2)
## [1] 0.98
round(MAPE(ysignal,yfreq),2)
## [1] 89.33

Each case shows the original series (in black) and the possible “forecast” (in red). I also plotted the forecast series against the original signal. Which case would you pick as the best forecast? What are your assumptions?

The journey of music and knowledge

Since I started my progressive rock project I’ve been receiving great support. Thank you all. It’s been an amazing journey. In this small post I will try to describe how it’s been going.

It’s been a pleasure to record the album for two reasons. First, I finally have the opportunity to play my favourite musical genre, progressive rock of course. I am using crazy effects, creating different atmospheres with easy and hard parts, expressing myself as an artist, and creating an amazing story. Second, because of the reading I am doing, I’ve been learning so much about the world, climate, climate change and its consequences. Oh boy, so many books and papers. So much to learn about how the world is interconnected.

The project is beautiful, but it is not easy. There are a lot of difficulties. As a musician, the first challenge after the songs are ready is the recording process, and it is not an easy task. Why? Mainly because of money. Recordings demand time and money. To do any recording, even the simplest one (with good quality), some minimal equipment is necessary. It is also a lot of work. These are the two main reasons why professional musicians (and studio engineers) don’t like to play (work) for free. However, this is another topic; let’s get back to my process.

I’ve done some sessions before, so I have good equipment to record my bass. Therefore, almost all the recordings can be done in a home studio. In addition, it is cheaper than any professional studio, right? True, but a home studio won’t simply appear on my desk out of nowhere. That was my first bump. Even if I am able to record everything by myself (which I mostly can, though some musician friends will contribute), I still don’t have all the equipment necessary to record the whole album. This is slowing the process a bit because I don’t have all the money necessary to buy everything at once. Therefore, I am not only recording the songs in parts but also buying the necessary equipment in parts (used and new).

This is only the first bump, and it is certain that I will have more bumps during my journey; that is part of the job. So far the songs are (in my humble opinion) becoming awesome! My plan is to release the first song by December. Let’s see if I can keep this deadline.