Post hoc analysis for Friedman's Test (R code)

My goal in this post is to give an overview of Friedman’s Test and then offer R code to perform post hoc analysis on Friedman’s Test results. (The R function can be downloaded from here)

Preface: What is Friedman’s Test

The Friedman test is a non-parametric randomized block analysis of variance, which is to say it is a non-parametric version of a one-way ANOVA with repeated measures. That means that while a simple ANOVA requires the assumptions of a normal distribution and equal variances (of the residuals), the Friedman test is free of those restrictions. The price of this freedom is a loss of power (of Friedman's test compared to the parametric ANOVA versions).

The hypotheses for the comparison across repeated measures are:

  • H0: The distributions (whatever they are) are the same across repeated measures
  • H1: The distributions across repeated measures are different

The test statistic for Friedman's test is a Chi-square with [(number of repeated measures) - 1] degrees of freedom. A detailed explanation of the method for computing the Friedman test is available on Wikipedia.

Performing Friedman's Test in R is very simple: just use the "friedman.test" command.
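For example, here is a minimal sketch (with made-up data) of calling it on a matrix in which each row is a subject (block) and each column is a repeated measure:

	set.seed(42)
	y <- matrix(rnorm(30), nrow = 10, ncol = 3,
				dimnames = list(paste0("subject", 1:10), paste0("treatment", 1:3)))
	friedman.test(y)	# rows are the blocks (subjects), columns are the repeated measures
	# The same test can also be run on a long data.frame with the formula interface:
	# friedman.test(Y ~ X | block, data = my.long.data)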

Post hoc analysis for the Friedman’s Test

Assuming you performed Friedman's Test and found a significant P value, you know that some of the groups in your data have a different distribution from one another, but you don't (yet) know which. Therefore, our next step will be to try and find out which pairs of groups are significantly different from each other. But when we have N groups, checking all of their pairs means performing [N choose 2] comparisons, so the need to correct for multiple comparisons arises.
The tasks:
Our first task will be to perform a post hoc analysis of our results (using R), in the hope of finding out which of our groups are responsible for the rejection of the null hypothesis. While in the simple case of ANOVA an R command is readily available ("TukeyHSD"), in the case of Friedman's test (until now) the code to perform the post hoc test was not as easily accessible.
Our second task will be to visualize our results. While in the case of a simple ANOVA a boxplot of each group is sufficient, in the case of repeated measures a boxplot approach will be misleading to the viewer. Instead, we will offer two plots: one of parallel coordinates, and the other of boxplots of the differences between all pairs of groups (in this respect, the post hoc analysis can be thought of as performing paired wilcox.test comparisons with a correction for multiplicity; a rough sketch of that idea is given below).
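Here is a hedged sketch of that "paired wilcox.test with a multiplicity correction" idea, on a made-up long data.frame with columns Y (numeric), X (group factor) and block (subject factor):

	set.seed(1)
	d <- data.frame(Y = rnorm(30), X = gl(3, 1, 30, labels = c("A", "B", "C")), block = gl(10, 3))
	pairs <- combn(levels(d$X), 2)	# all choose(3, 2) = 3 pairs of groups
	raw.p <- apply(pairs, 2, function(ab)
		wilcox.test(d$Y[d$X == ab[1]], d$Y[d$X == ab[2]], paired = TRUE)$p.value)
	names(raw.p) <- apply(pairs, 2, paste, collapse = " - ")
	p.adjust(raw.p, method = "holm")	# correct the pairwise p-values for multiplicity

The function presented below does something in this spirit, but through the "coin" framework (ranks within blocks, a max-type statistic, and single-step adjusted p-values), which is the more principled route.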

R code for Post hoc analysis for the Friedman’s Test

The analysis will be performed using a function (I wrote) called "friedman.test.with.post.hoc", based on the packages "coin" and "multcomp". Just a few words about its arguments:

  • formu – a formula object of the shape: Y ~ X | block (where Y is the ordered (numeric) response, X is a group indicator (factor), and block is the block (or subject) indicator (factor))
  • data – a data frame with columns for Y, X and block (the names can be different, of course, as long as the formula given in "formu" reflects that)
  • All the other parameters are to allow or suppress plotting of the results.
friedman.test.with.post.hoc <- function(formu, data, to.print.friedman = T, to.post.hoc.if.signif = T, to.plot.parallel = T, to.plot.boxplot = T, signif.P = .05, color.blocks.in.cor.plot = T, jitter.Y.in.cor.plot = F)
{
	# formu is a formula of the shape: 	Y ~ X | block
	# data is a long data.frame with three columns:    [[ Y (numeric), X (factor), block (factor) ]]
	# Note: This function doesn't handle NA's! In case of NA in Y in one of the blocks, that entire block should be removed.

	# Loading needed packages
	if(!require(coin)) {
		print("You are missing the package 'coin', we will now try to install it...")
		install.packages("coin")
		library(coin)
	}
	if(!require(multcomp)) {
		print("You are missing the package 'multcomp', we will now try to install it...")
		install.packages("multcomp")
		library(multcomp)
	}
	if(!require(colorspace)) {
		print("You are missing the package 'colorspace', we will now try to install it...")
		install.packages("colorspace")
		library(colorspace)
	}

	# get the names out of the formula
	formu.names <- all.vars(formu)
	Y.name <- formu.names[1]
	X.name <- formu.names[2]
	block.name <- formu.names[3]

	if(dim(data)[2] > 3) data <- data[, c(Y.name, X.name, block.name)]	# In case the "data" data frame has more than the three columns we need, this removes the extra ones

	# stopping in case there is NA in the Y vector
	if(sum(is.na(data[, Y.name])) > 0) stop("Function stopped: This function doesn't handle NA's. In case of NA in Y in one of the blocks, that entire block should be removed.")

	# make sure that the factor levels match the actual values present in the data:
	data[, X.name] <- factor(data[, X.name])
	data[, block.name] <- factor(data[, block.name])
	number.of.X.levels <- length(levels(data[, X.name]))
	if(number.of.X.levels == 2) { warning(paste("'", X.name, "'", "has only two levels. Consider using paired wilcox.test instead of friedman test")) }

	# making the object that will hold the friedman test and the post hoc comparisons
	the.sym.test <- symmetry_test(formu, data = data,	### all pairwise comparisons
					   teststat = "max",
					   xtrafo = function(Y.data) { trafo(Y.data, factor_trafo = function(x) { model.matrix(~ x - 1) %*% t(contrMat(table(x), "Tukey")) } ) },
					   ytrafo = function(Y.data) { trafo(Y.data, numeric_trafo = rank, block = data[, block.name]) })
	# if(to.print.friedman) { print(the.sym.test) }

	if(to.post.hoc.if.signif)
	{
		if(pvalue(the.sym.test) < signif.P)
		{
			# the post hoc test
			The.post.hoc.P.values <- pvalue(the.sym.test, method = "single-step")	# this is the post hoc of the friedman test

			# plotting
			if(to.plot.parallel & to.plot.boxplot) par(mfrow = c(1,2)) # if we are plotting two plots, let's make sure we'll be able to see both

			if(to.plot.parallel)
			{
				X.names <- levels(data[, X.name])
				X.for.plot <- seq_along(X.names)
				plot.xlim <- c(.7, length(X.for.plot) + .3)	# adding some spacing from both sides of the plot

				if(color.blocks.in.cor.plot) {
					blocks.col <- rainbow_hcl(length(levels(data[, block.name])))
				} else {
					blocks.col <- 1 # black
				}

				data2 <- data
				if(jitter.Y.in.cor.plot) {
					data2[, Y.name] <- jitter(data2[, Y.name])
					par.cor.plot.text <- "Parallel coordinates plot (with Jitter)"
				} else {
					par.cor.plot.text <- "Parallel coordinates plot"
				}

				# adding a Parallel coordinates plot
				matplot(as.matrix(reshape(data2, idvar = X.name, timevar = block.name,
								 direction = "wide")[, -1]),
						type = "l", lty = 1, axes = FALSE, ylab = Y.name,
						xlim = plot.xlim,
						col = blocks.col,
						main = par.cor.plot.text)
				axis(1, at = X.for.plot, labels = X.names) # plot X axis
				axis(2) # plot Y axis
				points(tapply(data[, Y.name], data[, X.name], median) ~ X.for.plot, col = "red", pch = 4, cex = 2, lwd = 5)
			}

			if(to.plot.boxplot)
			{
				# first we create a function to create a new Y, by subtracting different combinations of X levels from each other.
				subtract.a.from.b <- function(a.b, the.data)
				{
					the.data[, a.b[2]] - the.data[, a.b[1]]
				}

				temp.wide <- reshape(data, idvar = X.name, timevar = block.name,
								 direction = "wide") 	# one row per X level, one column per block
				wide.data <- as.matrix(t(temp.wide[, -1]))
				colnames(wide.data) <- temp.wide[, 1]

				Y.b.minus.a.combos <- apply(with(data, combn(levels(data[, X.name]), 2)), 2, subtract.a.from.b, the.data = wide.data)
				names.b.minus.a.combos <- apply(with(data, combn(levels(data[, X.name]), 2)), 2, function(a.b) { paste(a.b[2], a.b[1], sep = " - ") })

				the.ylim <- range(Y.b.minus.a.combos)
				the.ylim[2] <- the.ylim[2] + max(sd(Y.b.minus.a.combos))	# adding some space for the legend
				is.signif.color <- ifelse(The.post.hoc.P.values < .05, "green", "grey")

				boxplot(Y.b.minus.a.combos,
					names = names.b.minus.a.combos,
					col = is.signif.color,
					main = "Boxplots (of the differences)",
					ylim = the.ylim)
				legend("topright", legend = paste(names.b.minus.a.combos, rep(" ; PostHoc P.value:", number.of.X.levels), round(The.post.hoc.P.values, 5)), fill = is.signif.color)
				abline(h = 0, col = "blue")
			}

			list.to.return <- list(Friedman.Test = the.sym.test, PostHoc.Test = The.post.hoc.P.values)
			if(to.print.friedman) { print(list.to.return) }
			return(list.to.return)
		} else {
			print("The results were not significant, there is no need for a post hoc test")
			return(the.sym.test)
		}
	}
	# Original credit (for linking online to the package that performs the post hoc test) goes to "David Winsemius", see:
}


(The code for the example is given at the end of the post)

Let's make up a little story: let's say we have three types of wine (A, B and C), and we would like to know which one is the best (on a scale of 1 to 7). We asked 22 friends to taste each of the three wines (in a blindfolded fashion), and then to give each a grade from 1 to 7 (for example's sake, let's say we asked them to rate each wine 5 times and then averaged their results to get one number for a person's preference for each wine. This number, being an average of several numbers, will not necessarily be an integer).

After getting the results, we started by performing a simple boxplot of the ratings each wine got. Here it is:

The plot shows us two things: 1) the assumption of equal variances might not hold here; and 2) if we ignore the "within subjects" information that we have, we have no chance of finding any difference between the wines.

So we move to using the function "friedman.test.with.post.hoc" on our data, and we get the following output:

Asymptotic General Independence Test
data:  Taste by
Wine (Wine A, Wine B, Wine C)
stratified by Taster
maxT = 3.2404, p-value = 0.003421
Wine B – Wine A 0.623935139
Wine C – Wine A 0.003325929
Wine C – Wine B 0.053772757

The conclusion is that once we take into account the within-subject variable, we discover that there is a significant difference between our three wines (a significant P value of about 0.0034). And the post hoc analysis shows us that the difference is due to the difference in taste between Wine C and Wine A (P value 0.003), and maybe also to the difference between Wine C and Wine B (P value 0.053, which is just borderline significant).

Plotting our analysis will also show us the direction of the results, and the connected answers of each of our friends:

Here is the code for the example:

source("")  # loading the function from the internet
	### Comparison of three wines ("Wine A", "Wine B", and "Wine C")
	WineTasting <- data.frame(
		  Taste = c(5.40, 5.50, 5.55,
					5.85, 5.70, 5.75,
					5.20, 5.60, 5.50,
					5.55, 5.50, 5.40,
					5.90, 5.85, 5.70,
					5.45, 5.55, 5.60,
					5.40, 5.40, 5.35,
					5.45, 5.50, 5.35,
					5.25, 5.15, 5.00,
					5.85, 5.80, 5.70,
					5.25, 5.20, 5.10,
					5.65, 5.55, 5.45,
					5.60, 5.35, 5.45,
					5.05, 5.00, 4.95,
					5.50, 5.50, 5.40,
					5.45, 5.55, 5.50,
					5.55, 5.55, 5.35,
					5.45, 5.50, 5.55,
					5.50, 5.45, 5.25,
					5.65, 5.60, 5.40,
					5.70, 5.65, 5.55,
					6.30, 6.30, 6.25),
					Wine = factor(rep(c("Wine A", "Wine B", "Wine C"), 22)),
					Taster = factor(rep(1:22, rep(3, 22))))
	with(WineTasting , boxplot( Taste  ~ Wine )) # boxplotting the data
	friedman.test.with.post.hoc(Taste ~ Wine | Taster, WineTasting)	# the same with our function, with post hoc tests and the plots

If you find this code useful, please let me know (in the comments) so I will know there is a point in publishing more such code snippets…

Is it harder to advertise to the more educated? Correlation in US States data will not be enough to answer us…

“Chitika research” published today a fun small dataset (you can download it from here) in a post titled “The Educated are Harder to Advertise To”.

In this post I have three goals in mind:

  1. Suggesting another plot instead of the one used in the original post.
  2. Emphasizing the “Correlation does not imply causation” rule.
  3. Inviting other R lovers (as myself) to find fun things to do with this (and similar) dataset.

The Data

The data set is comprised of 51 rows, one for each US state, with two variables (columns):

  • CTR – "Click Through Rate", taken from Chitika's database and collected over two random days in January (a total of 31,667,158 impressions), covering the full range of Internet users (they don't have traditional demographic data – every impression is completely anonymous).
  • Percent of the population who graduated college.

Super basic analysis and plot

This data presents a striking -0.63 correlation between the two measurements, hinting that "The Educated are Harder to Advertise To" (as the original post suggested). The data can be easily visualized using a scatter plot:

Created using just a few lines of R code:

aa <- read.table("", sep = "\t", header = T)	# read the tab-delimited data file
aa[,2:3] <- aa[,2:3] * 100	# turn the proportions into percentages
plot(aa[,2] ~ aa[,3], sub = paste("Correlation: ", round(cor(aa[,2], aa[,3]), 2)),
	main = "Scatter plot of %CTR VS %College_Grad per State",
	xlab = "%College_Grad per State",
	ylab = "%CTR per State")
abline(lm(aa[,2] ~ aa[,3]), col = "blue")	# add a regression line

My conclusion from the analysis

I was asked in the comments (by Eyal) to add my own conclusions to the analysis. Does higher intelligence imply lower chances of clicking on ads? My answer (given the present data) is simply "I don't know". The only real conclusion I can draw from the data is that there might be a point in checking this effect in a more rigorous way (which I am sure is already being done).

What should we have done in order to know? When doing scientific research, we often ask ourselves how sure we are of our results. The rule of thumb for this type of question is called "the pyramid of evidence". It is a way to organize various ways of getting "information" about the world into a hierarchy of reliability. Here is a picture of this pyramid:

(Credit: image source)

We can see that the most reliable source is a systematic review of randomized controlled trials. In our case, that would mean having controlled experiments where you take groups of people with different levels of “intelligence” (how would you measure that?), and check their CTR (click through rates) on banner ads. This should be done in various ways, correcting for various confounders , and later the results and conclusions (from several such experiments) should be systematically reviewed by experts on the subject.

All of this should be done in order to make a real assessment of the underlying question: how smarts affect banner clicking.
And the reason we need all of this work is because of what is said in the title of the next section:

Correlation does not imply causation

As is written in the article on wikipedia:

“Correlation does not imply causation” is a phrase used in science and statistics to emphasize that correlation between two variables does not automatically imply that one causes the other (though it does not remove the fact that correlation can still be a hint, whether powerful or otherwise). The opposite belief, correlation proves causation, is a logical fallacy by which two events that occur together are claimed to have a cause-and-effect relationship.

But a much clearer explanation of it is given by the following XKCD comic strip:
Correlation on XKCD

Next step: other resources to play with

The motivation for my post is based on this Digg post trying to hint at how religiousness is connected to "negative" things such as crime, poverty and so on. That post was based on the following links:


If someone is motivated, he/she can extract that data and combine it with the data provided here.

In conclusion: this simple dataset, combined with other data resources, provides an opportunity for various fun demonstrations of pairwise correlation plots and of nice spatial plots (of states colored by their matching variable); a rough sketch of such a state map is given below. It is a good opportunity to emphasize (to students, friends and the like) that "Correlation does not imply causation!".
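Just to illustrate the idea, here is a minimal, hedged sketch (assuming the "aa" data frame from above, with the state names in its first column, and the "maps" package installed):

	library(maps)
	m <- map("state", plot = FALSE)	# get the state polygon names
	polygon.state <- sub(":.*", "", m$names)	# "new york:manhattan" -> "new york"
	ctr.bins <- cut(aa[, 2], breaks = 5)	# bin the CTR values into 5 levels
	state.cols <- heat.colors(5)[as.numeric(ctr.bins)]
	names(state.cols) <- tolower(aa[, 1])	# assuming column 1 holds the state names
	map("state", fill = TRUE, col = state.cols[polygon.state])	# color each state by its CTR bin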
And finally – If you are an R lover/blogger and feel like playing with this – please let me know 🙂 .

R Web Application – "Hello World" using RApache (~7min video tutorial)

I just noticed a Google Buzz from Jeroen Ooms, with a YouTube video titled "RApache Hello World + POST arguments + catching errors".

In this ~7 min video tutorial, Jeroen shares with us:

  1. How to write “Hello World” in a website using RApache.
  2. How to extract arguments from a form submitted by the website visitor (and then insert them into an "rnorm" function so as to control the output); a rough sketch of this kind of handler is given after this list. And finally,
  3. How to catch an error in case of an invalid argument on an R Web Application.
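Just to give a feel for it, here is a minimal, hedged sketch of what such an RApache handler script might look like (the exact setup and API details are covered in the video; "POST" and "setContentType" are objects RApache exposes to handler scripts, and the form field "n" is my own made-up example):

	setContentType("text/html")	# tell RApache we are returning HTML
	n <- as.numeric(POST$n)	# 'POST' holds the arguments submitted by the form
	if (is.na(n) || n <= 0) {
		cat("<p>Please supply a positive number n.</p>")	# catching an invalid argument
	} else {
		cat("<p>Hello World! Here are", n, "random normal values:</p>")
		cat("<pre>", paste(round(rnorm(n), 3), collapse = " "), "</pre>")
	}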

Thank you Jeroen for a very simple, step by step, tutorial:

p.s: For more videos by Jeroen, have a look at

Highlight the R syntax on your (WordPress) blog using the wp-syntax plugin

Update (11.10.10): I found a better solution for R syntax highlighting than the one presented in this post. The plugin is called WP-CodeBox, and I wrote about it in the post WP-CodeBox: A better R syntax highlighter plugin for WordPress.
Download link for the WP-Syntax plugin (with the newer GeSHi version)

In case you have a self hosted WordPress blog, and you wish to show your R code in it, how would you do it?

The simplest solution would be to just paste the code as plain text, which will look like this:

x <- rnorm(100, mean = 2, sd = 3)
plot(x, xlab = "index", main = "Example code")

But you might like to help your readers orient themselves inside your code by giving different colors to different commands (a.k.a. syntax highlighting), so it would look something like this:

x <- rnorm(100, mean = 2, sd = 3) # Creating a vector
plot(x, xlab = "index", main = "Example code") # Plotting it

How then would you do it?

Plugin Installation

The easiest way to do this inside a self hosted WordPress blog is by installing a plugin called WP-Syntax:

WP-Syntax provides clean syntax highlighting using GeSHi — supporting a wide range of popular languages (including R). It supports highlighting with or without line numbers and maintains formatting while copying snippets of code from the browser.

But there is a problem. The current WP-Syntax version uses an old version of GeSHi, and only the newer GeSHi version includes support for R syntax. In order to solve this I patched the plugin, and I encourage you to download (the fixed version of) WP-Syntax from here, which will allow you to highlight your R code.


After installing (and activating) the plugin, in order to add R code to your post you will need to:
1) Work only in HTML mode (not Visual mode). Otherwise, the code you paste will get messed up.
2) Put your code between <pre> tags, like this:

(Note: make sure to retype the quotation marks (") as plain straight quotes, so the tag will work.)

<pre lang="rsplus" line="1">
…Your R code here…
</pre>

Final note: R Syntax highlight in other ways

If you wish to have R syntax highlighting inside an HTML file, I encourage you to have a look at the highlight package by Romain Francois.

If you want to highlight your R syntax using Emacs, here is a blog post by Erik Iverson showing how to do that.

p.s: If you have a blog in which you write about R, please let me know about it in the comments (Or just join – I’d love to follow you 🙂

Update: Stephen Turner wrote about a syntax highlighting solution for R and blogger using github gist. And also mentioned there another solution for self hosted wordpress blogs, via J.D. Long: a Github Gist plugin for WordPress. Go publish code 🙂

Barnard's exact test – a powerful alternative for Fisher's exact test (implemented in R)

(The R code for Barnard’s exact test is at the end of the article, and you could also just download it from here, or from github)

Barnard's exact test – p-value based on the nuisance parameter

About Barnard’s exact test

About half a year ago, I was studying various statistical methods to employ on contingency tables. I came across a promising method for 2×2 contingency tables called “Barnard’s exact test“. Barnard’s test is a non-parametric alternative to Fisher’s exact test which can be more powerful (for 2×2 tables) but is also more time-consuming to compute (References can be found in the Wikipedia article on the subject).

The test was first published by George Alfred Barnard (1945) (link to the original paper in Nature). Mehta and Senchaudhuri (2003) explain why Barnard’s test can be more powerful than Fisher’s under certain conditions:

When comparing Fisher’s and Barnard’s exact tests, the loss of power due to the greater discreteness of the Fisher statistic is somewhat offset by the requirement that Barnard’s exact test must maximize over all possible p-values, by choice of the nuisance parameter, π. For 2 × 2 tables the loss of power due to the discreteness dominates over the loss of power due to the maximization, resulting in greater power for Barnard’s exact test. But as the number of rows and columns of the observed table increase, the maximizing factor will tend to dominate, and Fisher’s exact test will achieve greater power than Barnard’s.
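To make the idea of "maximizing over the nuisance parameter" concrete, here is a small, hedged sketch of the mechanics (this is my own illustration using a simple Wald-type statistic, not Peter Calhoun's implementation, which is linked at the end of the post):

	barnard.sketch <- function(x1, n1, x2, n2, pi.grid = seq(0.001, 0.999, by = 0.001)) {
		# Wald-type statistic for the difference between two proportions
		wald.stat <- function(y1, y2) {
			p1 <- y1 / n1; p2 <- y2 / n2; p <- (y1 + y2) / (n1 + n2)
			se <- sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
			ifelse(se == 0, 0, (p1 - p2) / se)
		}
		obs <- abs(wald.stat(x1, x2))
		all.tables <- expand.grid(y1 = 0:n1, y2 = 0:n2)	# every 2x2 table with the same group sizes
		extreme <- abs(wald.stat(all.tables$y1, all.tables$y2)) >= obs - 1e-12
		# for each value of the nuisance parameter pi, the probability of an "extreme" table
		p.of.pi <- sapply(pi.grid, function(p)
			sum(dbinom(all.tables$y1[extreme], n1, p) * dbinom(all.tables$y2[extreme], n2, p)))
		list(statistic = obs, p.value = max(p.of.pi), pi.at.max = pi.grid[which.max(p.of.pi)])
	}
	# Example call: barnard.sketch(7, 12, 1, 10)

The maximization over pi is exactly the extra work (and the extra power for 2x2 tables) that the quote above describes.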

About the R implementation of Barnard’s exact test

After finding about Barnard’s test I was sad to discover that (at the time) there had been no R implementation of it. But last week, I received a surprising e-mail with good news. The sender, Peter Calhoun, currently a graduate student at the University of Florida, had implemented the algorithm in R. Peter had  found my posting on the R mailing list (from almost half a year ago) and was so kind as to share with me (and the rest of the R community) his R code for computing Barnard’s exact test. Here is some of what Peter wrote to me about his code:

On a side note, I believe there are more efficient codes than this one.  For example, I’ve seen codes in Matlab that run faster and display nicer-looking graphs.  However, this code will still provide accurate results and a plot that gives the p-value based on the nuisance parameter.  I did not come up with the idea of this code, I simply translated Matlab code into R, occasionally using different methods to get the same result.  The code was translated from:

Trujillo-Ortiz, A., R. Hernandez-Walls, A. Castro-Perez, and L. Rodriguez-Cardozo. Barnard's Exact Probability Test. A MATLAB file. URL

My goal was to make this test accessible to everyone.  Although there are many ways to run this test through Matlab, I hadn’t seen any code to implement this test in R.  I hope it is useful for you, and if you have any questions or ways to improve this code, please contact me at [email protected]

Continue reading “Barnard's exact test – a powerful alternative for Fisher's exact test (implemented in R)”

Web Development with R – an HD video tutorial of Jeroen Ooms talk

Here is an HD version of a video tutorial on web development with R, a lecture given by Jeroen Ooms (the guy who made A web application for R's ggplot2). This talk was given at the Bay Area useR Group meeting on R-Powered Web Apps.

You can also view the slides for his talk and view (great) examples for: stockplot, lme4, and ggplot2.

Thanks again to Jeroen for sharing his knowledge and experience!

Statistics plugins for WordPress

Today I came across a post named "24 Noble WordPress Plugins To Determine The Performance of your Blog" through Weblog Tools Collection (one of my favorite places to stay updated on WordPress). The post provided a good solid list of statistics plugins for WordPress. Some of them are too old to count (pun intended), others are much more recent and relevant.

As a statistics (and WordPress) lover myself, I was inspired to extend the list of WordPress statistics plugins, in the hope of benefiting the community:
Blog Metrics
This plugin is based on ideas in an excellent post by Avinash Kaushik (Whom I consider a Web analytics guru and a brilliant blogger!).

It calculates:

  • Raw Author Contribution:
    • average number of posts per month
    • average number of words per post
  • Conversation Rate:
    • average number of comments per post (without your own comments)
    • average number of words used in comments to posts

It calculates these both for all the time you've been blogging and for the last month, and then adds the values to a page on your WordPress dashboard.

(Screenshots: Blog Metrics for a single-author blog; Blog Metrics per author)

Search Meter

This plugin is a must for any blogger. Period.

If you have a Search box on your blog, Search Meter automatically records what people are searching for — and whether they are finding what they are looking for. Search Meter’s admin interface shows you what people have been searching for in the last couple of days, and in the last week or month. It also shows you which searches have been unsuccessful. If people search your blog and get no results, they’ll probably go elsewhere. With Search Meter, you’ll be able to find out what people are searching for, and give them what they want by creating new posts on those topics.  […]

Google Analytics Dashboard

Google Analytics Dashboard gives you the ability to view your Google Analytics data in your WordPress dashboard. You can also allow other users to see the same dashboard information when they are logged in, or embed parts of the data into posts or as part of your theme.

The biggest advantage of this plugin in my view is that it adds sparklines in the “posts -> edit” page in the admin area.

Analytics360

I don't use this one much. But one feature I find interesting is that it adds markers of when you posted something to the trend line of your Google Analytics traffic data. It also mixes in data from MailChimp, which I don't use.

MailChimp’s Analytics360 plugin allows you to pull Google Analytics and MailChimp data directly into your dashboard, so you can access robust analytics tools without leaving WordPress.

Broken Link Checker
This plugin is also a must.

This plugin will monitor your blog looking for broken links and let you know if any are found.

  • Monitors links in your posts, pages, the blogroll, and custom fields (optional).
  • Detects links that don’t work and missing images.
  • Notifies you on the Dashboard if any are found.
  • Also detects redirected links.
  • Makes broken links display differently in posts (optional).
  • Link checking intervals can be configured.
  • New/modified posts are checked ASAP.
  • You can view broken links, redirects, and a complete list of links used on your site, in the Tools -> Broken Links tab.
  • Searching and filtering links by URL, anchor text and so on is also possible.
  • Each link can be edited or unlinked directly via the plugin’s page, without manually editing each post.

Piwik + WP-Piwik

This plugin adds a Piwik stats site to your WordPress dashboard. It’s also able to add the Piwik tracking code to your blog.
Piwik is an open source (GPL licensed) web analytics software program. It provides you with detailed real time reports on your website visitors: the search engines and keywords they used, the language they speak, your popular pages and so on…

You can install Piwik more or less like you install WordPress, and then integrate it into your blog. The only real downside of it for me (compared to Google Analytics) is the weaker advanced segmentation and pivoting. But in general it is a free, great (and growing!) web analytics solution.

Woopra Analytics Plugin
I have been using Woopra since their release, thanks to Lorelle. I enjoy the ability to follow the live actions happening inside the blog. Although since Woopra went from BETA to GOLD, I lost most of my interest, because the blogs I track have more traffic volume than Woopra allows tracking in their free account. But small bloggers could find the service gratifying.

Woopra is the world’s most comprehensive, information rich, easy to use, real-time Web tracking and analysis application.

Features include:

  • Live Tracking and Web Statistics
  • A rich user interface and client monitoring application
  • Real-time Analytics
  • Manage Multiple Blogs and Websites
  • Deep analytic and search capabilities
  • Click-to-chat
  • Visitor and member tagging
  • Real-time notifications
  • Easy Installation and Update Notification

Final notes

If you are into web analytics, I also encourage you to give the following a try: Nuconomy, ClickTale, Crazy Egg. And of course, Google Analytics. Each of them (and also Woopra) strips you and your visitors of a bit more privacy. But that is the price we pay for the strong web analytics solutions that exist out there.
If you know of any more statistics plugins I missed, feel encouraged to share them with me in the comments 🙂

Animation video of rgl in action

Duncan Murdoch just posted a youtube video presenting an animation clip of a 3d rgl object.

Duncan even went further and wrote an explanation on how he made the video:

here are the steps I used:
1.  Design a shape to be displayed, and then play with the animation functions to make it change over time.  Use play3d to do it live in R, movie3d to write the individual frames of the movie to .png files.
2.  Use the ffmpeg package (not an R package, but a separate project) to convert the .png files to an .mp4 file.  The individual frames totalled about 1 GB; the compressed movie is about 45 MB.
3.  Upload to Youtube.  I’m not a musician, so I had to use one of their licensed background tracks, I couldn’t write my own.  I spent a lot of time picking one and then adjusting the timing of the video to compensate.  Each render/upload cycle at full resolution took about an hour and a half.  It’s a lot faster to render in a smaller window with fewer frames per second, but it’s still tedious.   It’s easier to synchronize if you actually have a copy of the music locally, but Youtube doesn’t let you download their music.  So the timing isn’t perfect, but it’s good enough for me!
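For readers who want to try step 1 themselves, here is a minimal, hedged sketch (my own toy example, not Duncan's actual code) of animating an rgl scene with play3d and writing frames with movie3d:

	library(rgl)
	open3d()
	spheres3d(rnorm(50), rnorm(50), rnorm(50), radius = 0.2, col = rainbow(50))	# a toy shape
	play3d(spin3d(axis = c(0, 0, 1), rpm = 10), duration = 6)	# watch it spin live in R
	movie3d(spin3d(axis = c(0, 0, 1), rpm = 10), duration = 6, fps = 10,
			movie = "rgl_demo", dir = tempdir(), convert = FALSE)	# write the individual .png frames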

Wonderful work Duncan, thanks for sharing!

Announcing a new R news site (for bloggers by bloggers)

I already wrote about R-bloggers on the R mailing list, so it only seems fitting to write about it more here. I will explain what R-bloggers is and then move on to explain what I hope it will accomplish. R-bloggers.com is a central hub of content collected from bloggers who write about R (in English), and if you are an R blogger you can join it by filling in this form.

I built the site with the aspiration of helping R bloggers and users connect and follow the "R blogosphere". As I write these words, R-bloggers already has 17 blogs in it, and I hope for many (many) more.

How does R-Bloggers operate? The site aggregates feeds (only with permission!) from participating R blogs. The beginning of each participating blog's posts is automatically displayed on the main page with a link to the original post; inside every post there is a link to the original blog and links to other related articles. All participating blogs also have links in the "Contributors" section of the sidebar.

What does R-Bloggers offer its visitors?

  • Discover (for all): Find new R blogs you didn’t know about. And Search in them for content you want.
  • Follow (for people who don’t use RSS): Enter your e-mail and subscribe to receive a daily digest with teasers of new posts from participating blogs. You will more easily get a sense of hot topics in the R blogosphere.
  • Connect (for facebook users): Click on “Fan this site” to become a “fan” of R Bloggers. You can then “friend” other people and share thoughts on our wall. Or just by leaving comments on the blog.
  • Participate (for bloggers): Add your R blog to get increased visibility (for readers and search engines) with permanent links on our Contributors sidebar. Your blog will also gain visibility via our e-mail digest and through your presence on the main page with posts.

Who started R-Bloggers (and why)? R-Bloggers was started by Tal Galili (well, me). After searching for numerous R blogs, I decided that there must be more R blogs out there than I knew about, and maybe the best way of finding them was to make them find me.

After writing about it on the R mailing list, I got some good feedback, but also questions about why the site uses only R blogs and not all the R feeds that exist, who the website is actually for (when there are services like Google Reader for us to read our feeds with), and what I am hoping it will do. So here is what I answered:

For me there are two audiences:
One is that of the web 2.0 power users. That is, people who know what RSS is and use it, and maybe even write their own blogs. These people have only one problem (as I see it) that R-bloggers tries to solve, and that is to know who else lives in their ecosystem – who else they should follow.
For that, Google Reader's recommendation system is great, but not enough. A much better system is one place where all R bloggers would go, write down their website, and then all of us would know they exist. That is what R-bloggers offers the power users. I think this is also why over 20 of them have subscribed to the site's RSS feed.
BTW, the origin of this idea came to me when I was trying to find all the dance bloggers for my wife (who is a dance researcher and blogger herself). After a while we started a similar hub, knowing of only 10 bloggers. That list now has over 80 bloggers, most of whom we would not have known about without this hub.
I am trying to do the same thing for the R community, and that is why I hope more R bloggers will write about the service – so that their network of readers, which includes other R bloggers, will add themselves and we will all know about them.
If that was my only purpose, a simple directory would have been enough. But I also have a second one and that is to help the second audience.

The second audience I am thinking of are people in our community who are not so much early adopters (and are actually quite late adopters) of the facilities that the new web (a.k.a. web 2.0) provides.
To them the whole RSS thing is too much to look at, and they are used to e-mails. Because of that they have been (until now) disconnected from many of the R bloggers out there, simply because it is inefficient for them to go through all these blogs every day (or even week). So for them, seeing all the content in one place (and even getting an e-mail about it) would be (I hope) a service. I believe that's why 5 of them (so far) have subscribed via e-mail.
I also hope teachers will direct their students to this as a resource for getting a sense of what people who are using R are doing.
Another thing that gives me a hint about the R community is seeing how the "facebook fan box" is still empty. It tells me that (sadly) very few R users are actively using Facebook as a means of connecting with the wider networks of people out there.

All I wrote also explains why R-bloggers will only take feeds of bloggers, and only (as much as can be said) their posts that are centered around R (hence the website name 🙂 ).
It both follows what Gabor talked about – having a site whose content is only about R – and also what I wish, which is to have "content" in the sense of articles to read (mostly), and not so much things like news feeds from Wikipedia or announcements of new packages.

I hope this post will notify people about this new resource, encourage more R bloggers to join, and help future people better understand what this R-Bloggers thing is all about 🙂

A web application for R's ggplot2

One of the exciting new frontiers for R programming is creating web interfaces to R code. At the forefront of this domain is a young and (very) bright man named Jeroen Ooms, whom I had the pleasure of meeting at useR 2009 (press the link to see his presentation).

Today Jeroen announced a new version (0.11) of his web interface to ggplot2. See it here:

As Jeroen wrote:

New features include 1D geom’s (histogram, density, freqpoly), syntax mode (by clicking the tiny arrow at the bottom), and some additional facet options. And some minor improvements and fixes, most notably for Internet Explorer.
The data upload has not been improved yet, I am working on that. For now, it supports .csv, .sav (spss), and tab delimited data. Please make sure your filename has the appropriate extension and every column has a header in your data. If you export a dataframe from R, use:
write.csv(mydf, "mydf.csv", row.names=F). If you upload an spss datafile, none of this should be a concern.
Supported browsers are IE6-8, FF, Safari, and Chrome, but a recent browser is highly recommended. As always, feedback is more than welcome.

Here is a little demo video that shows how to use the new features:

The datafile from the demo is available at

I wish the best to Jeroen, and hope to see many more such uses in the future.