New versions for ggplot2 (0.8.8) and plyr (1.0) were released today

As prolific as CRAN is with packages, a few R packages still succeed in standing out for their widespread use (and quality). Hadley Wickham's ggplot2 and plyr are two such packages.
And today (through Twitter) Hadley updated the rest of us with the news:

just released new versions of plyr and ggplot2. source versions available on cran, compiled will follow soon #rstats

Going to the CRAN website shows that plyr has gone through the more major update, with its last update (before the current one) taking place on 2009-06-23. Now, over a year later, we are presented with plyr version 1.0, which includes new functions, new features, some bug fixes, and much-anticipated speed improvements.
ggplot2 has made a tiny leap from version 0.8.7 to 0.8.8, and was previously last updated on 2010-03-03.
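
If you want to try the new versions right away, here is one way to do it (a minimal sketch – note that type = "source" requires being able to build source packages, e.g. Rtools on Windows; the compiled binaries should appear on CRAN within a few days anyway):

# Grab the new source versions straight from CRAN
# (only needed until the compiled binaries follow):
install.packages(c("plyr", "ggplot2"), type = "source")

# Check which versions are now installed:
installed.packages()[c("plyr", "ggplot2"), "Version"]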

I, and I am sure many other R users, are very thankful for the amazing work Hadley Wickham is doing (both on his code and in helping other useRs on the mailing lists). So Hadley, thank you!

Here is the complete change-log list for both packages:
Continue reading “New versions for ggplot2 (0.8.8) and plyr (1.0) were released today”

StackOverFlow and MetaOptimize are battling to be the #1 "Statistical Analysis Q&A website" – to which would you sign up?

A new statistical analysis Q&A website launched

While the proposal for a statistical analysis Q&A website on area51 (stackexchange) is taking its time, and the website is still collecting people who will commit to it,
Joseph Turian, who seems like a nice guy from his various comments online, seems to feel this website is not what the community needs and that we shouldn't hold our questions back until it goes online. Therefore, Joseph is pushing with all his might his newest creation, "MetaOptimize QA", a StackOverFlow-like website for (long list follows): machine learning, natural language processing, artificial intelligence, text analysis, information retrieval, search, data mining, statistical modeling, and data visualization.
It comes with all the bells and whistles that the OSQA framework (an open-source StackOverFlow-clone system, and more) can offer (you know, rankings, badges and so on).

Is this new website better than the area51 website? Will all the people go to just one of the two websites, or will we end up with two places that attract more people than we had to begin with? These are the questions that come to mind when faced with the story in front of us.

My own suggestion is to try both websites (the upcoming StackOverflow statistical analysis website and "MetaOptimize QA") and let time tell.

More info on this story below.

MetaOptimize online impact so far

The need for such a Q&A site is clearly evident. Just a few days after being promoted online, MetaOptimize has claimed the eyes of almost 300 users, who have submitted 59 questions and 129 answers.
Already, many bloggers in the statistical community have contributed their voices with encouraging posts; here is just a collection of the posts I was able to find with some googling:

But is it good to have two websites?

But wait, didn’t we just start pushing forward another statistical Q&A website two weeks ago?  I am talking about the Stack Exchange Q&A site proposal: Statistical Analysis.

So what should we (the community of statistically minded people) do the next time we have a question?

Should we wait for the Stack Exchange website to start? Or should we start using MetaOptimize?

Update: after a lengthy e-mail exchange with Joseph (the person who founded MetaOptimize), I decided to erase what I originally wrote as my doubts, and instead give the Q&A session that he and I had over e-mail. It is a bit edited from the original, and some of the content will probably get updated – so if you are into this subject, check in again in a few hours 🙂


Honestly, I am split in two (and Joseph, I do hope you'll take this in a positive way, since personally I feel confident you are a good guy). I very strongly believe in the need and value of such a Q&A website. Yet I am wondering how I feel about such a website being hosted at MetaOptimize, outside the hands of the stackoverflow guys.
On the one hand, open source lovers (like myself) tend to like decentralization and reliance on OSS (open source software) solutions (such as the one the OSQA framework offers). On the other hand, I do believe that the stackoverflow people have (much) more experience in handling such websites than Joseph. I can very easily trust them to do regular database backups, share the website's database dumps with the general community, smoothly test and upgrade to provide new features, and generally speaking act in a more experienced way within the online Q&A community.
That doesn't mean Joseph won't do a great job – personally, I hope he will.

Q&A session with Joseph Turian (MetaOptimize founder)

Tal: Let's start with the easy question: should I worry about technical issues with the website (like, for example, backups)?

Joseph:

The OSQA team (backed by DZone) have got my back. They have been very helpful since day one to all OSQA users, and have given me a lot of support. Thanks, especially Rick and Hernani!

They provide email and chat support for OSQA users.

I will commit to putting up regular automatic database dumps, whenever the OSQA team implements it:
http://meta.osqa.net/questions/3120/how-do-i-offer-database-dumps
If, in six months, they don’t have this feature as part of their core, and someone (e.g. you) emails me reminding me that they want a dump, I will manually do a database dump and strip the user table.

Also, I’ve got a scheduled daily database dump that is mirrored to Amazon S3.

Tal: Why did you start MetaOptimize instead of supporting the area51 proposal?
Joseph:

  1. On Area51, people asked to have AI merged with ML, and ML merged with statistical analysis, but their requests seemed to be ignored. This seemed like a huge disservice to these communities.
  2. Area 51 didn't have academics in ML + NLP. I know from experience it's hard to get them to buy in to new technology. So why would I risk my reputation getting them to sign up for Area 51, when I know that I will get a 1% conversion? They aren't early adopters interested in the process; many are late adopters who won't sign up for something until they have to.
  3. If the Area 51 sites had a strong newbie bent, which is what it seemed like the direction was going, then the academic experts definitely wouldn't waste their time. It would become a support community for newbies, without core expert discussion. So basically, I know that I and a lot of my colleagues wanted the site I built. And I felt like area 51 was shaping the communities really incorrectly in several respects, and was also taking a while. I could have fought an institutional process and maybe gotten half the results above and it took a few months, or I could just build the site and invite my friends, and shape the community correctly.

Besides that, there are also personal motives:

  • I wanted the recognition for having a good vision for the community, and driving forward something they really like.
  • I wanted to experiment with some NLP and ML extensions for the Q+A software, to help organize the information better. Not possible on a closed platform.

Tal: I (and maybe some other people) fear that this might fork the people in the field into two websites, instead of bringing them together. What are your thoughts about that?
Joseph:
How am I forking the community? I’m bringing a bunch of people in who wouldn’t have even been part of the Area 51 community.
Area 51 was going to fork it into five communities: stat analysis, ML, NLP, AI, and data mining. And then a lot fewer people would have been involved.

Tal: What are the things that people who support your website are saying?
Joseph:
Here are some quotes about my site:

Philip Resnick (UMD): “Looking at the questions being asked, the people responding, and the quality of the discussion, I can already see this becoming the go-to place for those ‘under the hood’ details
you rarely see in the textbooks or conference papers. This site is going to save a lot of people an awful lot of time and frustration.”

Aria Haghighi (Berkeley): “Both NLP and ML have a lot of folk wisdom about what works and what doesn’t. A site like this is crucial for facilitating the sharing and validation of this collective knowledge.”

Alexandre Passos (Unicamp): “Really thank you for that. As a machine learning phd student from somewhere far from most good research centers (I’m in brazil, and how many brazillian ML papers have you
seen in NIPS/ICML recently?), I struggle a lot with this folk wisdom. Most professors around here haven’t really interacted enough with the international ML community to be up to date”
(http://news.ycombinator.com/item?id=1476247)

Ryan McDonald (Google): “A tool like this will help disseminate and archive the tricks and best practices that are common in NLP/ML, but are rarely written about at length in papers.”

esoom on Reddit: “This is awesome. I’m really impressed by the quality of some of the answers, too. Within five minutes of skimming the site, I learned a neat trick that isn’t widely discussed in the literature.”
(http://www.reddit.com/r/MachineLearning/comments/ckw5k/stackoverflow_for_machine_learning_and_natural/c0tb3gc)

Tal: In order to be fair to the area51 effort, it has gotten wonderful responses for the "statistical analysis" proposal as well (see it here).
I have also contacted area51 directly and invited them to come and join the discussion. I'll update this post with their reply.

So what’s next?

I don’t know.
If the Stack Exchange website were to launch today, I would probably focus on using it, and point people to it over MetaOptimize (for the reasons I just mentioned, and also for some that Rob Hyndman raised when he first wrote on the subject).
If the Stack Exchange version of the website were to start in a few weeks, I would probably sit on the fence and see whether people are using it. I suspect that by that time there wouldn't be many people left to populate it (but I could always be wrong).
And what if the website were to start in a week – what then? I have no clue.

Good question.
My current feeling is that I am glad to let this play out.
It seems this is a good case study of some healthy competition between platforms and models (OSQA vs the stackoverflow/area51 system) – one that I hope will generate more good features from both companies, and will also make both parties work hard to get people to participate.
It also seems that this situation is exposing many people in our field to the same idea (a Q&A website). After Joseph's input on the subject, I am starting to think that maybe, at the end of the day, this will benefit all of us. Instead of forking one community into two, maybe what we'll end up with is more (experienced) people online (in two locations) who would otherwise have stayed in the shadows.

The jury is still out, but I am a bit more optimistic than I was when I first wrote this post. I'll update it after getting more input from people.

And as always – I would love to know your thoughts on the subject.

Visualization of regression coefficients (in R)

Update (07.07.10): The function in this post has a more mature version in the "arm" package. See the end of this post for more details.
* * * *

Imagine you want to give a presentation or a report of your latest findings from running some sort of regression analysis. How would you do it?

This was exactly the question Wincent Rong-gui HUANG recently asked on the R mailing list.

One person, Bernd Weiss, responded by linking to the chapter "Plotting Regression Coefficients" in an interesting online book (which I had never heard of before) called "Using Graphs Instead of Tables" (I should add this link to the free statistics e-books list…)

Later in the conversation, Achim Zeileis surprised us (well, me) by saying the following:

I’ve thought about adding a plot() method for the coeftest() function in the “lmtest” package. Essentially, it relies on a coef() and a vcov() method being available – and that a central limit theorem holds. For releasing it as a general function in the package the code is still too raw, but maybe it’s useful for someone on the list. Hence, I’ve included it below.

(I allowed myself to add some bolding to the text)

So for the convenience of all of us, I uploaded Achim’s code in a file for easy access. Here is an example of how to use it:

source("https://www.r-statistics.com/wp-content/uploads/2010/07/coefplot.r.txt")

data("Mroz", package = "car")
fm <- glm(lfp ~ ., data = Mroz, family = binomial)
coefplot(fm, parm = -1)

Here is the resulting graph:

I hope Achim will get around to improving the function so that he deems it worthy of joining his "lmtest" package. I am glad he shared his code, so the rest of us have something to work with in the meantime 🙂

* * *

Update (07.07.10):
Thanks to a comment by David Atkins, I found out there is a more mature version of this function (called coefplot) inside the {arm} package. This version offers many features, one of which is the ability to easily stack several confidence intervals one on top of the other.

It works for bayesglm, glm, lm and polr objects, and a default method is available which takes pre-computed coefficients and their associated standard errors from any suitable model.

Example:
(Notice that comparing these particular models with one another does not make much sense statistically, but it is enough to illustrate the use of the function)

library("arm")
data("Mroz", package = "car")
M1<-      glm(lfp ~ ., data = Mroz, family = binomial)
M2<- bayesglm(lfp ~ ., data = Mroz, family = binomial)
M3<-      glm(lfp ~ ., data = Mroz, family = binomial(probit))
coefplot(M2, xlim=c(-2, 6),            intercept=TRUE)
coefplot(M1, add=TRUE, col.pts="red",  intercept=TRUE)
coefplot(M3, add=TRUE, col.pts="blue", intercept=TRUE, offset=0.2)
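
The default method mentioned above can also be fed pre-computed estimates directly. Here is a hedged sketch of that usage, reusing M1 from the example above – I am assuming the default method's first two arguments take the vector of estimates and the vector of standard errors (check ?coefplot before relying on this):

# Assumption: coefplot()'s default method takes (estimates, standard errors)
# as its first two arguments - see ?coefplot in the {arm} package.
est <- coef(M1)[-1]      # estimated coefficients, dropping the intercept
se  <- se.coef(M1)[-1]   # se.coef() is arm's extractor for standard errors
coefplot(est, se, varnames = names(est))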

(Hat tip to Allan Engelhardt for helping improve the code, and to Achim Zeileis for extending and improving the narration of the example)

Resulting plot

* * *
Lastly, another method worth mentioning is the nomogram, implemented in Frank Harrell's rms package.
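
For anyone curious to try it, here is a hedged sketch of a nomogram for a logistic model of the same Mroz data used above (the predictors below are chosen arbitrarily, just for illustration; rms requires setting up a datadist object first):

# A sketch, assuming the rms package is installed; the chosen predictors are arbitrary.
library(rms)
data("Mroz", package = "car")
dd <- datadist(Mroz)
options(datadist = "dd")   # rms needs this to know the predictor ranges
fit <- lrm(lfp ~ age + lwg + inc, data = Mroz)
plot(nomogram(fit, fun = plogis, funlabel = "Pr(lfp = yes)"))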

Contest: Road Traffic Prediction for Intelligent GPS Navigation

About prize-bearing contests

Competitions with prizes are an amazing thing. If you are not sure of that, I urge you to listen to Peter Diamandis talk about his experience with the X Prize (start listening at minute 11:40):

In short – prizes can yield up to a 1-to-50 return on investment for the people funding the prize. The money is spent only when results are achieved, and there is a lot of value in terms of public opinion and publicity. Best of all (for the promoter of the competition), prizes encourage people to take risks (at their own expense) in order to get results.

All of that said, I view prize-bearing competitions as something worth spreading, especially in cases where the results of the winning team will be shared with the public.

About the IEEE ICDM Contest

The IEEE ICDM Contest ("Road Traffic Prediction for Intelligent GPS Navigation") seems to be one of those cases. Following a polite request, I am republishing here the details of this new competition, in the hope that some of my R colleagues will bring the community some pride 🙂
Continue reading “Contest: Road Traffic Prediction for Intelligent GPS Navigation”

ggplot2 GUI progress

(Written by Ian Fellows)

Below is a link to the first of a weekly (or bi-weekly) screen-cast vlog of my progress building a GUI for the ggplot2 package.

http://neolab.stat.ucla.edu/cranstats/gsoc_vlog1.mov

Comments and suggestions are more than welcome, and can be e-mailed to me at: [email protected]

A new Q&A website for Data-Analysis (based on StackOverFlow engine) – is waiting for you

The bottom line of this post is for you to go to:
Stack Exchange Q&A site proposal: Statistical Analysis
And commit yourself to using the website for asking and answering questions.
144 people have already committed to using the website; we need 356 more… 🙂
If you are looking for the reasons to do so – read on…

What is the StackOverFlow Q&A website about?

StackOverFlow.com ("SO" for short) is a programming Q&A site that's free. Free to ask questions, free to answer questions, free to read. Free, and fast.

For the R community, SO offers a growing database of R-related questions and answers (click the link to check them out).

You might be asking yourself what's so special about SO compared to other available resources such as the R mailing lists, R blogs, the R wiki and so on?
That is a great question.

The answer is that SO does a great job of synthesizing aspects of wikis, blogs, forums, and Digg/Reddit to offer a very powerful Q&A website.

In SO, new questions are like forum/blog posts (a main text with comments/answers). After someone answers a question, other users can give the answer a thumbs-up or a thumbs-down (like on Digg/Reddit). And all content can be edited by the users, like a wiki page (provided the user has enough "karma points").
You also get badges ("awards") for a bunch of actions (like visiting the website every day for a month, giving an answer that received X thumbs-ups, and so on). The awards allow someone who asks a question to see how good the reputation of the person answering is (in terms of the acceptance/appreciation of his answers by other SO members).
It also offers a small (but effective) ego-boost for the person who gives answers.

So if StackOverFlow is so great – what is this new website you wrote about in the title?

Well, StackOverFlow has one limitation. It deals ONLY with programming questions. Other questions like:

  • Which of the following three graphics best displays this data set? Why?
  • Can you give an example of where I might prefer to use a z-test vs a t-test?
  • What is the relationship between Bayesian and neural networks?

will not be answered, and the threads will get closed as being "off topic". Why? Because such questions deal with statistics, data analysis, data mining and data visualization – but by no means with programming.

So there is no StackOverFlow-like Q&A website for data analysis… Until now!

In the past few weeks, Rob Hyndman and other users have made much effort to push for the creation of a new website, based on the StackOverFlow engine, to allow for statistically related Q&A.
His proposal for a new website is almost complete. All it needs is for you (yes, you) to go to the following link:
Stack Exchange Q&A site proposal: Statistical Analysis
And commit yourself to the website (that is, click the button called "commit" – to declare that you will have an interest in reading, asking and answering questions on such a website).

Once a few hundred more people (379 more, to be exact) commit – the website will go online!

Hope to see you there.

Clustergram: visualization and diagnostics for cluster analysis (R code)

About Clustergrams

In 2002, Matthias Schonlau published in "The Stata Journal" an article named "The Clustergram: A graph for visualizing hierarchical and nonhierarchical cluster analyses". As explained in the abstract:

In hierarchical cluster analysis dendrogram graphs are used to visualize how clusters are formed. I propose an alternative graph named “clustergram” to examine how cluster members are assigned to clusters as the number of clusters increases.
This graph is useful in exploratory analysis for non-hierarchical clustering algorithms like k-means and for hierarchical cluster algorithms when the number of observations is large enough to make dendrograms impractical.

A similar article was later written and was (maybe) published in "Computational Statistics".

Both articles give some nice background on known methods like k-means and methods for hierarchical clustering, and then go on to present examples of using these methods (with the clustergram) to analyse some datasets.

Personally, I understand the clustergram to be a type of parallel coordinates plot where each observation is given a vector. The vector contains the observation’s location according to how many clusters the dataset was split into. The scale of the vector is the scale of the first principal component of the data.

Clustergram in R (a basic function)

After finding out about this method of visualization, curiosity drove me to play with it a bit. Since I didn't find any implementation of the graph in R, I went about writing the code to implement it myself.

The code only works for kmeans, but it shows how such a plot can be produced, and could later be modified to offer methods that connect with different clustering algorithms.

How does the function work: the function I present here takes a data.frame/matrix with a row for each observation and the variable dimensions in the columns.
The function assumes the data is scaled.
It then calculates the cluster centers for our data, for a varying number of clusters.
For each clustering iteration, the cluster centers are multiplied by the first loading of the principal components of the original data, giving a weighted mean of each cluster center's dimensions that might offer a decent one-number representation of that cluster (this approach has the known limitations of using the first component of a PCA for dimensionality reduction, but I won't go into that in this post).
Finally, all of our data points are ordered according to their respective cluster's projected center, and plotted against the number of clusters (thus creating the clustergram).
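
To make these steps concrete, here is a minimal sketch of the idea (this is not the full function linked to below – that one also spreads out overlapping lines and adds the cluster-center points):

# A bare-bones illustration of the steps described above (kmeans only):
clustergram_sketch <- function(Data, k.range = 2:8) {
	pc1 <- prcomp(Data)$rotation[, 1]   # loadings of the first principal component
	proj <- sapply(k.range, function(k) {
		fit <- kmeans(Data, centers = k)
		# project each cluster center on PC1, then give every observation
		# the value of its own cluster's projected center
		(fit$centers %*% pc1)[fit$cluster]
	})
	# one line per observation, across the different numbers of clusters
	matplot(k.range, t(proj), type = "l", lty = 1, col = "#00000030",
		xlab = "Number of clusters (k)",
		ylab = "PC1-weighted mean of cluster centers",
		main = "Clustergram (sketch)")
}
# Example use: clustergram_sketch(scale(iris[,-5]))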

My thanks go to Hadley Wickham for offering some good tips on how to prepare the graph.

Here is the code (example follows)

The R function can be downloaded from here
Corrections and remarks can be added in the comments below, or on the github code page.

Example on the iris dataset

The iris data set is a favorite example of many R bloggers when writing about R accessors, data exporting, data importing, and different visualization techniques.
So it seemed only natural to experiment on it here.

source("https://www.r-statistics.com/wp-content/uploads/2012/01/source_https.r.txt") # Making sure we can source code from github
source_https("https://raw.github.com/talgalili/R-code-snippets/master/clustergram.r")

data(iris)
set.seed(250)
par(cex.lab = 1.5, cex.main = 1.2)
Data <- scale(iris[,-5]) # notice I am scaling the vectors
clustergram(Data, k.range = 2:8, line.width = 0.004) # notice how I am using line.width.  Play with it on your problem, according to the scale of Y.

Here is the output:

Looking at the image we can notice a few interesting things. One of the clusters formed (the lower one) stays as is no matter how many clusters we allow (except for one observation that wanders away and then comes back).
We can also see that the second split is a solid one (in the sense that it splits the first cluster into two clusters which are not "close" to each other, and that about half the observations go to each of the new clusters).
And then notice how moving to 5 clusters makes almost no difference.
Lastly, notice how, when going for 8 clusters, we are practically left with 4 clusters (remember – this is according to the mean of the cluster centers, weighted by the loading of the first principal component of the data).

If I were to take something from this graph, I would say I have a strong tendency to use 3-4 clusters on this data.

But wait, did our clustering algorithm do a stable job?
Let's try running the algorithm 6 more times (each run will have a different starting point for the clusters)

source("https://www.r-statistics.com/wp-content/uploads/2012/01/source_https.r.txt") # Making sure we can source code from github
source_https("https://raw.github.com/talgalili/R-code-snippets/master/clustergram.r")

set.seed(500)
Data <- scale(iris[,-5]) # notice I am scaling the vectors
par(cex.lab = 1.2, cex.main = .7)
par(mfrow = c(3,2))
for(i in 1:6) clustergram(Data, k.range = 2:8 , line.width = .004, add.center.points = T)

Resulting with: (press the image to enlarge it)

Repeating the analysis offers even more insights.
First, it would appear that up to 3 clusters the algorithm gives rather stable results.
From 4 onwards we get various outcomes at each iteration.
In some of the cases, we got 3 clusters when we asked for 4 or even 5 clusters.

Reviewing the new plots, I would prefer to go with the 3-cluster option, noting how the two "upper" clusters might have similar properties while the lower cluster is quite distinct from the other two.

By the way, the iris data set is composed of three types of flowers. I imagine kmeans has done a decent job of distinguishing the three.

Limitation of the method (and a possible way to overcome it?!)

It is worth noting that the current way the algorithm is built has a fundamental limitation: the plot is good for detecting a situation where there are several clusters, but each of them is clearly "bigger" than the one before it (on the first principal component of the data).

For example, let's create a dataset with 3 clusters, each one is taken from a normal distribution with a higher mean:

source("https://www.r-statistics.com/wp-content/uploads/2012/01/source_https.r.txt") # Making sure we can source code from github
source_https("https://raw.github.com/talgalili/R-code-snippets/master/clustergram.r")

set.seed(250)
Data <- rbind(
				cbind(rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3)),
				cbind(rnorm(100,1, sd = 0.3),rnorm(100,1, sd = 0.3),rnorm(100,1, sd = 0.3)),
				cbind(rnorm(100,2, sd = 0.3),rnorm(100,2, sd = 0.3),rnorm(100,2, sd = 0.3))
				)
clustergram(Data, k.range = 2:5 , line.width = .004, add.center.points = T)

The resulting plot for this is the following:

The image shows a clear distinction between three ranks of clusters. There is no doubt (for me), from looking at this image, that three is the correct number of clusters.

But what if the clusters were different but didn't have an ordering to them?
For example, look at the following 4 dimensional data:

source("https://www.r-statistics.com/wp-content/uploads/2012/01/source_https.r.txt") # Making sure we can source code from github
source_https("https://raw.github.com/talgalili/R-code-snippets/master/clustergram.r")

set.seed(250)
Data <- rbind(
				cbind(rnorm(100,1, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3)),
				cbind(rnorm(100,0, sd = 0.3),rnorm(100,1, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3)),
				cbind(rnorm(100,0, sd = 0.3),rnorm(100,1, sd = 0.3),rnorm(100,1, sd = 0.3),rnorm(100,0, sd = 0.3)),
				cbind(rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,1, sd = 0.3))
				)
clustergram(Data, k.range = 2:8 , line.width = .004, add.center.points = T)

In this situation, it is not clear from the location of the clusters on the Y axis that we are dealing with 4 clusters.
But what is interesting is that, as the number of clusters grows, we can notice 4 "strands" of data points moving more or less together (until we reach 4 clusters, at which point the clusters start breaking up).
Another hope for handling this might be using the color of the lines in some way, but I haven't yet figured out how.

Clustergram with ggplot2

Hadley Wickham has kindly played with recreating the clustergram using the ggplot2 engine. You can see the result here:
http://gist.github.com/439761
And this is what he wrote about it in the comments:

I’ve broken it down into three components:
* run the clustering algorithm and get predictions (many_kmeans and all_hclust)
* produce the data for the clustergram (clustergram)
* plot it (plot.clustergram)
I don’t think I have the logic behind the y-position adjustment quite right though.

Conclusions (some rules of thumb and questions for the future)

At first look, it would appear that the clustergram can be of use. I can imagine using this graph to quickly run various clustering algorithms, compare them to each other, and review their stability (in the way I just demonstrated in the example above).

The three rules of thumb I have arrived at so far are:

  1. Look at the location of the cluster points on the Y axis. See when they remain stable, when they start flying around, and what happens to them at higher numbers of clusters (do they re-group together?)
  2. Observe the strands of the data points. Even if the cluster centers are not ordered, the lines for each item might (this needs more research and thinking) tend to move together – hinting at the real number of clusters.
  3. Run the plot multiple times to observe the stability of the cluster formation (and location).

Yet there is more work to be done and questions to seek answers to:

  • The code needs to be extended to offer methods for various clustering algorithms.
  • How can the colors of the lines be used better?
  • How can this be done using other graphical engines (ggplot2/lattice?) - (Update: look at Hadley's reply in the comments)
  • What to do in case the first principal component doesn't capture enough of the data? (Maybe plot this graph for all the relevant components, but then – how do you draw conclusions from it?)
  • What other uses/conclusions can be made based on this graph?

I am looking forward to reading your input/ideas in the comments (or in reply posts).

June 20, online Registration deadline for useR! 2010

useR! 2010 is coming. I am going to give two talks there (I will write more about that soon), but in the meantime, please note that the online registration deadline is approaching.

This was published on the R-help mailing list today:

————-

The final registration deadline for the R User Conference is June 20,
2010, one week away.  Later registration will not be possible on site!

Conference webpage:  http://www.R-project.org/useR-2010
Conference program: http://www.R-project.org/useR-2010/program.html

Registration:
http://www.R-project.org/useR-2010/registration/registration.html

The conference is scheduled for July 21-23, 2010, and will take place at
the campus of the National Institute of Standards and Technology (NIST) in
Gaithersburg, Maryland, USA.

Continue reading “June 20, online Registration deadline for useR! 2010”

Could we run a statistical analysis on iPhone/iPad using R?

Updates (17.07.10 + 13.09.10 + 03.05.11)

03.05.2011: The "Satisfaction blog" wrote about the idea of using the iPhone with RStudio – great job, Julyan!

I just came across David Smith's post on the REvolution blog, pointing to instructions on the R wiki for how to install R on the iPhone!
I didn't try it myself, since it requires jailbreaking the iPhone – and I don't have an iPhone. But it is still interesting to know about.

The blog “Computational Mathematics” recently published a post about a package on Cydia to ease R installation on iPhone, you can read it here: R on the iPhone.

Preface – I don’t use Mac

I don’t use Mac! Not that there is anything wrong with that, but I don’t use Mac…

Yet at the same time, wonderful people like my wife, my brother, my thesis advisor and even my mother-in-law all use Macs. So I can't help but wonder if I might be missing out on something.

Still, for a Windows user like me it is a bit difficult to understand the hype around the iPhone 4 release:

Such releases tend to look to me more like this spoof video about the release of the Apple "i".

So while I don't use Apple's products, I have deep respect for the impact they have made on people's lives. Which begs the question: could you use R on an iPhone (or an iPad)?

Can R be run on iPhone/iPad ?

This question (and the motivation for this post) was raised in an R help mailing list thread a week ago.

After receiving permission from the thread's author, I am republishing the content that was presented there, in the hope it might be of interest to other R community members.

And here is what “Marc Schwartz” wrote:
Continue reading “Could we run a statistical analysis on iPhone/iPad using R?”

Syncing files across computers using DropBox

Motivation

In the past few months I have been using DropBox for syncing my work files between my home and work computer. It has saved me from numerous mistakes and from sending the files to myself via e-mail.

Recently I found this service highly useful for sharing files with 4 other people with whom I am working on a data analysis project. Being so happy with it (and also gaining more storage space by inviting friends to use it), I thought I would share my experience here with other R users who might benefit from this cool (free) service.

What is Dropbox?

Dropbox is a software/Web 2.0 file hosting service which enables users to synchronize files and folders between computers across the internet.
This is done by installing the software and then picking a "shared folder" on your computer. From that moment on, that folder will be synced with any computer you choose to install the software on (for example, your home/work computer, your laptop – and so on).

http://www.youtube.com/watch?v=OFb0NaeRmdg

DropBox also enables users to share some of their folders with other DropBox users. This seamless integration of the service with your OS file system (Windows, Mac or Linux) is what makes this service so convenient, allowing me to work with co-workers and have the same "project tree" of folders, all of which are always synced.

You can also share a file "online" by getting a link to it, which you can then share with others. So, for example, you could write some R code, share it online, and call it later with source(). This is the easiest way I know of to do this.
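
For example (a hedged sketch – the link below is a made-up example of what a Dropbox "Public" folder URL looks like, not a real file):

# The URL here is fictional - replace it with the public link Dropbox gives you:
source("http://dl.dropbox.com/u/12345678/my_helper_functions.r")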

Dropbox is a "cloud computing" Web 2.0 file hosting service offering both free and paid plans. The free version (which I use) offers 2GB of "shared storage" (unless you invite other users, in which case you get some extended storage space – which is one of my motivations for writing this post).

Dropbox has other non-trivial uses as well.

The service's major competitors are Box.net, Sugarsync and Mozy, none of which I have had the chance to try.

How to start?

Simply go to: DropBox.com
Sign up, install the software, use the new shared folder, and let me know if it helped you 🙂

How to get Extra space?

You can:

  • Earn another 750MB of space by connecting your Dropbox to your Twitter/Facebook account and sending a status update about them. To get this bonus, head over to the "Get extra space free!" page.
  • Refer a friend to open a Dropbox account (every friend who joins earns you another 250MB of space). This bonus is capped at a total of 8GB of added space (after that, you won't get any more extra space).
  • Upgrade – pay $10 a month and get an extra 50GB.