Analysis of Israel’s 2015 election polls with Shiny + ggplot2

(This is a guest post by my friend Yoni Sidi, a PhD candidate in statistics at the Hebrew University)


The Israeli elections are coming up this Tuesday, 17/3/2015 (i.e. tomorrow!). They are a bit more complicated than your average US presidential race: elections in Israel are based on nationwide proportional representation. The electoral threshold is 3.25%, and each party receives a number of seats (or mandates), out of a total of 120, proportional to the number of votes it receives, so the threshold roughly translates to at least four mandates. The Israeli system is a multi-party system based on coalition governments, and "multi-party" is putting it mildly: 11 parties have a chance (and are expected) to pass the threshold.
There are two major parties, Hamachane Hazioni (Left Wing) and the Likud (Right Wing), each hoping to garner between 16% and 25% of the votes, i.e. 20-30 mandates. The main winners, though, are the medium-sized parties, which recommend to the President whom they believe has the best chance of forming the next government. So yes, there is a good possibility that the winner of the general election will not be the one constructing the coalition, making the actual winners the parties that together create the biggest coalition, one which exceeds 60 mandates.
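As a quick sanity check on the threshold arithmetic, here is a back-of-envelope sketch in R (the vote shares are invented, and the real seat allocation uses a quota method rather than the simple rounding shown here):

# why a 3.25% threshold roughly means four mandates out of 120
0.0325 * 120  # = 3.9
# invented vote shares: parties under the threshold are dropped and the
# 120 seats are split proportionally among the rest
shares <- c(0.25, 0.21, 0.10, 0.08, 0.03)
passed <- shares[shares >= 0.0325]
round(120 * passed / sum(passed))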
An abundance of polling is continually published during the run-up, and the variety of pollsters and publishers is hard to keep track of for a casual voter trying to gauge the temperature of the political landscape. I came across a great real-time database of past and present polling results, maintained by Project 61 on Google Docs, and decided it was a great opportunity to learn RStudio's Shiny library and create an app in which I can check current and past results. So, after I figured out how to connect Google Docs to R, I created a self-updating app that became a nice place to keep track of polling every day, check trends and distributions using interactive ggplot2 graphs, and simulate coalition outcomes.
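The Google Docs connection boils down to fetching the published sheet over HTTP. A minimal sketch, assuming the sheet is published to the web in CSV format (the <key> below is a placeholder, not the real Project 61 document key):

library(httr)
# fetch the published sheet as CSV and parse it into a data.frame
url <- "https://docs.google.com/spreadsheets/d/<key>/export?format=csv"
polls <- read.csv(text = content(GET(url), as = "text"), stringsAsFactors = FALSE)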
Please note that from Friday (March 13th) until election day (March 17th) it is forbidden to publish new polls in Israel, hence the data presented here does not allow up-to-date inference about the expected results of the election. This post is for educational purposes.

Running the election polls Shiny app on your computer

The github repo is available here.

#changing locale so the Hebrew party names render on Windows
if (Sys.info()[1] == "Windows") Sys.setlocale("LC_ALL", "Hebrew_Israel.1255")
#check to see if libraries need to be installed, then load them
libs <- c("shiny","shinyAce","httr","XML","stringr","ggplot2","scales","plyr","reshape2","dplyr")
x <- sapply(libs, function(x) if (!require(x, character.only = TRUE)) install.packages(x))
#run the app straight from the GitHub repo linked above, e.g.
#shiny::runGitHub("<repo>", "<user>")  # placeholders; see the repo link above
#reset to the original locale on Windows
if (Sys.info()[1] == "Windows") Sys.setlocale("LC_ALL")


Usage Instructions:

  1. Current Polling
  2. Election Analysis
  3. Mandate Simulator and Coalition Whiteboard
  4. Polling Database

Current Polling

  • The latest polling day results as published in the media, alongside the prediction made using the Project 61 weighting schemes. The parties are stacked into blocks to see which block has the best chance of creating a coalition.


The Project 61 prediction is based on past pollster error, deriving weights from the 2003, 2006, 2009 and 2013 elections, dependent on days to the election and on the parties. On their site there is an extensive analysis of pollster bias towards certain parties and party blocks.

Election Analysis

  • An interactive polling analysis layout where the user can filter elections, parties, publishers, pollsters and dates, and create different types of plots using any variable as the x and y axes.
  • The default layer is the 60-day trend (estimated with a loess smoother) of mandates published by each pollster, by party (a minimal sketch of this default is shown below).
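A minimal sketch of this default layer, assuming a data.frame polls with columns DaysLeft, Mandates, Party and Pollster (the column names are illustrative):

library(ggplot2)
# last 60 days of polling, a loess trend per party, one panel per pollster
ggplot(subset(polls, DaysLeft <= 60), aes(x = DaysLeft, y = Mandates, colour = Party)) +
  geom_point(alpha = 0.4) +
  geom_smooth(method = "loess", se = FALSE) +
  facet_wrap(~ Pollster) +
  scale_x_reverse()   # so the timeline runs toward election day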


The user can choose which elections (2003, 2006, 2009, 2013, 2015) to include in the plots, and the subsequent filters are populated with the parties, pollsters and publishers relevant to the chosen elections. Next there is a slider to choose how many days before the election to view in the plot; a slider was used instead of a calendar to create a uniform timeline when comparing across elections.

In addition, the plot itself is a ggplot, so the options above the graph give the user control over nearly everything needed to build a plot. The user can choose from the following variables:

Time       Party                             Results            Poll
--------   -------------------------------   ----------------   ---------
Election   Party                             Mandates           Publisher
DaysLeft   Ideology (5 Party Blocks)         Mandate.Group      Pollster
Date       Ideology.Group (2 Party Blocks)   Results
year       Attribute (Party History)         (Pollster) Error

To define the following plot attributes:

Plot Type   Axes                        Grouping                                        Plot Facets
---------   -------------------------   ---------------------------------------------   ------------
Point       X axis variable             Split Y by colors using a different variable    Row Facet
Bar         Discrete/Continuous                                                         Column Facet
Line        Rotation of X tick labels
Step        Y axis variable
  • Create facets to display subsets of the data in different panels (two more variables to cut the data). There are two types of facets to choose from, as sketched below:
    • Wrap: wrap a 1d ribbon of panels into 2d
    • Grid: lay out panels in a grid (matrix)
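In ggplot2 terms these map onto facet_wrap() and facet_grid(). A minimal sketch, reusing the app's convention that the plotting data.frame is called x, and assuming it has columns DaysLeft, Mandates, Party, Pollster and Election:

p <- ggplot(x, aes(x = DaysLeft, y = Mandates, colour = Party)) + geom_line()
p + facet_wrap(~ Pollster)            # Wrap: a 1d ribbon of panels laid out in 2d
p + facet_grid(Election ~ Pollster)   # Grid: rows = Election, columns = Pollster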

An example of filtering pollsters to compare different tendencies for each party in the 2015 elections:


An example of comparing the distribution of mandates per party over the last two months of polling:


An example of comparing the distribution of pollster errors across elections (up to 10 days before the end of polling), splitting the parties into five groups relative to the previous election: old party, new party, combined (a combination of two or more old parties), new.split (a new party created from a split of a party from the last election), and old.split (the old party that was left behind after the split).



As we can see, the pollsters do not get a good read on new, new.split or combined parties, which could be a problem this election since there are 3 combined parties and 2 new splits.


  • If you are an R user and know ggplot, there is an additional editor console below the plot where you can create advanced plots freehand. Just add to the final object from the GUI, which is called p (the underlying data.frame is called x), e.g. p + geom_point(). Note that all aesthetics must be given explicitly, as they are not defined in the original ggplot() call. It is also possible to use any library you want; just add it to the top of the code. The end object must be a ggplot.



#new layer added freehand in the console (the aesthetics here are illustrative, spelled out in full)
p + geom_point(aes(x = DaysLeft, y = Mandates, colour = Party))
  • You can also remove the original layer using the function remove_geom(ggplot_object, geom_layer), e.g. p = remove_geom(p, "point") will remove the geom_point layer from the original graph.


p = remove_geom(p, "point") #blank ggplot with the facets still in place
#new layer (illustrative): redraw the same data as lines instead of points
p + geom_line(aes(x = DaysLeft, y = Mandates, colour = Party))
  • Finally, the plots can be viewed in English or Hebrew, and can be downloaded to your local computer using the download button.

Mandate Simulator and Coalition Whiteboard

  • A bootstrap simulation is run on polling results from up to 10 of the latest polls, using the sampling error as the uncertainty of each published mandate count. Taking into account mandate surplus agreements (via the Hagenbach-Bischoff quota method) and the mandate threshold (four mandates in this election), the simulated final tally of mandates is calculated. The resulting distributions are plotted per party, along with the location of the median result published in the media.
  • The user can choose how many polls to take into account (up to the last 10) and how big a simulation to run: 50, 100, 500 or 1000 random polling results per party and poll.


  • Once the simulation is complete you can create coalitions, based on either the simulated distribution or the actual published polls, and see which can pass 60 mandates. Choose the coalition parties and the opposition parties from drop-down lists. (Yes, the ones chosen are nonsensical on purpose…) A toy version of the simulation idea is sketched below.
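This sketch only illustrates the resampling idea: the surplus-vote agreements handled via the Hagenbach-Bischoff quota in the app are omitted, and the party names and numbers are invented:

published <- c(Likud = 25, Machane.Zioni = 24, Yesh.Atid = 12, Meretz = 5)
simulate_poll <- function(mandates, n_sims = 1000, threshold = 4) {
  replicate(n_sims, {
    # perturb each party's mandates by a rough sampling error
    draw <- pmax(rnorm(length(mandates), mean = mandates, sd = sqrt(mandates)), 0)
    draw[draw < threshold] <- 0       # parties under the threshold get nothing
    round(120 * draw / sum(draw))     # rescale the remainder to 120 seats
  })
}
sims <- simulate_poll(published)
rownames(sims) <- names(published)
apply(sims, 1, median)                # median simulated mandates per party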


Polling Database

  • All raw data used in the application can be viewed and filtered in a datatable.
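In Shiny this amounts to a one-liner on the server side (a sketch; the output name and data frame are made up):

# expose the raw polling data as a searchable, filterable table
output$polls_table <- renderDataTable(polls, options = list(pageLength = 25))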

Creating good looking survival curves – the 'ggsurv' function

This is a guest post by Edwin Thoen

Currently I am doing my master's thesis on multi-state models. Survival analysis was my favourite course in the master's program, partly because of the great survival package, which is maintained by Terry Therneau. The only thing I am not so keen on are the default plots created by this package using plot.survfit. Although the plots are very easy to produce, they are not that attractive (as are most R default plots) and legends have to be added manually. I come across them all the time in the literature and wondered whether there was a better way to display survival curves. Since I was getting to grips with ggplot2 recently, I decided to write my own function with the same functionality as plot.survfit but with a much better looking result. I stuck to the defaults of plot.survfit as much as possible, for instance by default plotting confidence intervals for single-stratum survival curves, but not for multi-stratum curves. Below you'll find the code of the ggsurv function. Just as plot.survfit, it only requires a fitted survival object to produce a default plot. We'll use the lung data set from the survival package for illustration. First we load the function into the console (see the end of this post).

Once the function is loaded we can get going, using the lung data set from the survival package for illustration.

library(survival)  # provides survfit and the lung data set
lung.surv <- survfit(Surv(time, status) ~ 1, data = lung)
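With the fit in hand, the default plot is then a single call (assuming the ggsurv function from the end of the post has been sourced):

ggsurv(lung.surv)  # a single stratum, so a confidence interval is drawn by default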


Continue reading Creating good looking survival curves – the 'ggsurv' function

Comparison of ave, ddply and data.table

A guest post by Paul Hiemstra.

Fortran and C programmers often say that interpreted languages like R are nice and all, but lacking in terms of speed. How fast something works in R depends greatly on how it is implemented, i.e. on which packages/functions one uses. A prime example, which shows up regularly on the R-help list, is letting a vector grow as you perform an analysis. In pseudo-code this might look like:

dum = NULL
for (i in 1:100000) {
  new_outcome = sqrt(i)       # stand-in for "some stuff"
  dum = c(dum, new_outcome)   # dum grows on every iteration
}
The problem here is that dum is continuously growing in size. This forces R to repeatedly allocate new memory for the object and copy it, which is terribly slow. Preallocating dum to its final length greatly improves performance (a sketch of the preallocated loop is below). Alternatively, the use of apply-type functions, or functions from the plyr package, prevents these kinds of problems. But even between more advanced methods there are large differences between implementations.
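A minimal sketch of the preallocated version of the same loop:

dum = numeric(100000)         # allocate the full vector once
for (i in 1:100000) {
  new_outcome = sqrt(i)       # stand-in for "some stuff"
  dum[i] = new_outcome        # fill by index: no repeated copying
}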

Take the next example. We create a dataset which has two columns: one column with values (e.g. amount of rainfall) and another with a category (e.g. monitoring station id). We would like to know the mean value per category. One way is to use for loops, but I'll skip that one for now. Three possibilities exist that I know of: ddply (plyr), ave (base R) and data.table; a one-line illustration of each is sketched below, and the piece of code at the end of this post compares all three.
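A toy illustration of the three approaches (a sketch; note that ave() returns the group mean repeated for every row, while the other two return one row per category):

library(plyr); library(data.table)
d <- data.frame(value = runif(20), cat = rep(1:4, 5))
with(d, ave(value, cat))                      # base R
ddply(d, .(cat), summarise, m = mean(value))  # plyr
data.table(d)[, mean(value), by = cat]        # data.table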
The outcome in terms of speed is:

   datsize noClasses  tave tddply tdata.table
1    1e+05        10 0.091  0.035       0.011
2    1e+05        50 0.102  0.050       0.012
3    1e+05       100 0.105  0.065       0.012
4    1e+05       200 0.109  0.101       0.010
5    1e+05       500 0.113  0.248       0.012
6    1e+05      1000 0.123  0.438       0.012
7    1e+05      2500 0.146  0.956       0.013
8    1e+05     10000 0.251  3.525       0.020
9    1e+06        10 0.905  0.393       0.101
10   1e+06        50 1.003  0.473       0.100
11   1e+06       100 1.036  0.579       0.105
12   1e+06       200 1.052  0.826       0.106
13   1e+06       500 1.079  1.508       0.109
14   1e+06      1000 1.092  2.652       0.111
15   1e+06      2500 1.167  6.051       0.117
16   1e+06     10000 1.338 23.224       0.132

It is quite obvious that ddply performs very badly when the number of unique categories is large; the ave function performs better. However, the data.table option is by far the best one, easily outperforming both alternatives. In response to this, Hadley Wickham (the author of plyr) responded:

This is a drawback of the way that ddply always works with data frames. It will be a bit faster if you use summarise instead of data.frame (because data.frame is very slow), but I’m still thinking about how to overcome this fundamental limitation of the ddply approach.
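The summarise variant Hadley refers to would look something like this (a sketch against the expdata frame from the benchmark code below):

# one small summary row per group, instead of a full data frame per group
system.time(res2b <- ddply(expdata, .(cat), summarise, m = mean(value)))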

I hope this comparison is of use to readers. And remember, think before complaining that R is slow :).

Paul (e-mail: [email protected])

P.S. This blog post is based on discussions on the R-help and manipulatr mailing lists.

R code to perform the comparison

library(plyr)
library(data.table)
library(reshape2)
library(ggplot2)

datsize = c(10e4, 10e5)   # i.e. 1e5 and 1e6 rows
noClasses = c(10, 50, 100, 200, 500, 1000, 2500, 10e3)
comb = expand.grid(datsize = datsize, noClasses = noClasses)
res = ddply(comb, .(datsize, noClasses), function(x) {
  # random values assigned to random categories
  expdata = data.frame(value = runif(x$datsize),
      cat = round(runif(x$datsize, min = 0, max = x$noClasses)))
  expdataDT = data.table(expdata)
  t1 = system.time(res1 <- with(expdata, ave(value, cat)))
  t2 = system.time(res2 <- ddply(expdata, .(cat), mean))
  t3 = system.time(res3 <- expdataDT[, sum(value), by = cat])  # note: a sum, as in the original post
  return(data.frame(tave = t1[3], tddply = t2[3], tdata.table = t3[3]))
}, .progress = 'text')
# plot the timings on a log scale, one panel per data size
ggplot(aes(x = noClasses, y = log(value), color = variable),
    data = melt(res, id.vars = c("datsize","noClasses"))) +
  facet_wrap(~ datsize) +
  geom_line()

Engineering Data Analysis (with R and ggplot2) – a Google Tech Talk given by Hadley Wickham

It appears that just days ago, Google Tech Talks released a new, hour-long video of a presentation (from June 6, 2011) by one of the R community's more influential contributors, Hadley Wickham.

This seems to be one of the better talks to send to a programmer friend who is interested in getting into R.

Talk abstract

Data analysis, the process of converting data into knowledge, insight and understanding, is a critical part of statistics, but there's surprisingly little research on it. In this talk I'll introduce some of my recent work, including a model of data analysis. I'm a passionate advocate of the view that data analysis should be carried out using a programming language, and I'll justify this by discussing some of the requirements of good data analysis (reproducibility, automation and communication). With these in mind, I'll introduce you to a powerful set of tools for better understanding data: the statistical programming language R, and the ggplot2 domain specific language (DSL) for visualisation.

The video

More resources

Rose plot using Deducer's ggplot2 plot builder

The (excellent!) LearnR blog had a post today about making a rose plot in ggplot2.

Following today's announcement by Ian Fellows regarding the release of the new version of Deducer (0.4), which offers strong support for ggplot2 through a GUI plot builder, Ian also sent an e-mail showing how to create a rose plot using the new ggplot2 GUI included in the latest version of Deducer. After the template is made, the plot can be generated with 4 clicks of the mouse.

Here is a video tutorial (which Ian published) showing how this can be used:

The generated template file is available at:

I am excited about the work Ian is doing, and hope to see more people publish use cases with Deducer.

ggplot2 plot builder is now on CRAN! (through Deducer 0.4 GUI for R)

Ian Fellows, a hard-working contributor to the R community (and a cool guy), announced today the release of Deducer 0.4 to CRAN (scheduled to update in the next day or so).
This major update also includes the release of a new plug-in package (DeducerExtras), containing additional dialogs and functionality.

Following is the e-mail he sent out with all the details and demo videos.

Continue reading ggplot2 plot builder is now on CRAN! (through Deducer 0.4 GUI for R)

New versions for ggplot2 (0.8.8) and plyr (1.0) were released today

As prolific as CRAN is in packages, several R packages succeed in standing out for their widespread use (and quality); Hadley Wickham's ggplot2 and plyr are two such packages.
And today (through twitter) Hadley updated the rest of us with the news:

just released new versions of plyr and ggplot2. source versions available on cran, compiled will follow soon #rstats

Going to the CRAN website shows that plyr has gone through the more major update of the two, with the last update (before the current one) taking place on 2009-06-23. Now, over a year later, we are presented with plyr version 1.0, which includes new functions, new features, some bug fixes, and much-anticipated speed improvements.
ggplot2 has made a tiny leap from version 0.8.7 to 0.8.8, and was previously last updated on 2010-03-03.

I, and I am sure many other R users, am very thankful for the amazing work that Hadley Wickham is doing (both on his code, and in helping other useRs on the help lists). So Hadley, thank you!

Here is the complete change-log list for both packages:
Continue reading New versions for ggplot2 (0.8.8) and plyr (1.0) were released today