Blogging about R – presentation and audio

At the useR! 2010 conference I had the honor of giving a (~15 minute) talk titled “Blogging about R”. The following is the abstract I submitted, followed by the slides of the talk and the audio file of a recording I made of it (I am sad it picked up a bit of “hall echo”, but it is still listenable…)

P.S.: this post does not absolve me from writing up something (with many thanks and links to people) about the useR! 2010 conference, but I can see it taking a bit longer until I do that.

—————–

Abstract of the talk

This talk is a basic introduction to blogs: why to blog, how to blog, and the importance of the R blogosphere to the R community.

Because R is an open-source project, the R community members rely (mostly) on each other’s help for statistical guidance, generating useful code, and general moral support.

Current online tools available for us to help each other include the R mailing lists, the community R-wiki, and the R blogosphere. The emerging R blogosphere is the only source, besides the R Journal, that provides our community with articles about R. While these articles are not peer reviewed, they do come in higher volume (and are often of very high quality).

According to the meta-blog R-bloggers.com, the (English) R blogosphere produced, in January 2010, about 115 “articles” about R. There are currently a bit over 50 bloggers (now about 100) who write about R, with about 1,000 (now ~2,200) subscribers who read them daily (through e-mail or RSS). These numbers lead me to believe that there is genuine interest in our community for more people – perhaps you? – to start (and continue) blogging about R.

In this talk I intend to share knowledge about blogging so that more people are able to participate (freely) in the R blogosphere – both as readers and as writers. The talk will have three main parts:

  • What is a blog
  • How to blog – using the (free) blogging service WordPress.com (with specific emphasis on R)
  • How to develop readership – integration with other social media/network platforms, SEO, and other best practices

* * *
Tal Galili founded www.R-bloggers.com and blogs on www.R-statistics.com
* * *

Audio recording of the talk


Richard Stallman talk+Q&A at the useR! 2010 conference (audio files attached)

The audio files of the full talk by Richard Stallman are attached to the end of this post.

—————–

Videos of all the invited talks of the useR! 2010 conference can be viewed on the R User Group blog

—————–

Last week I had the honor of attending the talk given by Richard Stallman, the last keynote speaker at the useR! 2010 conference. In this post I will give some brief context for the talk, and then share the audio files, with a description of what was said in them.

Context for the talk

Richard Stallman can be viewed as (one of) the fathers of free software (free as in speech, not as in beer).

He is the man who led the GNU project for the creation of a free (as in speech, not as in beer) operating system, on the basis of which GNU/Linux, with its numerous distributions, was created.
Richard also developed a number of pieces of widely used software, including the original Emacs, the GNU Compiler Collection, the GNU Debugger, and many tools in the GNU Coreutils.

Richard also initiated the free software movement; in October 1985 he founded its formal body, the Free Software Foundation, and in 1989 he co-founded the League for Programming Freedom.

Stallman pioneered the concept of “copyleft”, and he is the main author of several copyleft licenses, including the GNU General Public License, the most widely used free software license.

You can read more about him in the Wikipedia article titled “Richard Stallman”.

The useR! 2010 conference is an annual four-day conference of the community of people using R. R is free, open-source software for data analysis and statistical computing (here is a bit more about what R is).

The conference this year was truly a wonderful experience for me.  I  had the pleasure of giving two talks (about which I will blog later this month), listened to numerous talks on the use of R, and had a chance to meet many (many) kind and interesting people.

Richard Stallman’s talk

The talk took place on July 23rd, 2010, at NIST (U.S.), and was the concluding talk of the useR! 2010 conference. It consisted of a two-hour lecture followed by a half-hour question-and-answer session.

On a personal note, I was very impressed by Richard’s talk. Richard is not a shy computer geek, but rather a serious leader and thinker trying to stir people to action. His speech was a sermon on free software, the history of GNU/Linux, the various versions of the GPL, and his own history with them.

I believe this talk would be of interest to anyone who cares about social solidarity, free software, programming and the hope of a better world for all of us.

I am eager for your thoughts in the comments (but please keep a kind tone).

Here is Richard Stallman’s (two-hour) talk:


Want to join the closed BETA of a new Statistical Analysis Q&A site – NOW is the time!

The bottom line of this post is for you to go to:
Stack Exchange Q&A site proposal: Statistical Analysis
And commit yourself to using the website for asking and answering questions.

(And also consider giving the contender, MetaOptimize, a visit)

* * * *

The statistical analysis Q&A website is about to go into BETA

A month ago I invited readers of this blog to commit to using a new Q&A website for data analysis (based on the StackOverflow engine) once it opens (the site was originally proposed by Rob Hyndman).
And now, a month later, I am happy to write that over 500 people have shown interest in the website and chosen to commit themselves. This means we have reached 100% completion of the website proposal process, and in the next few days we will move to the next step.

The next step is that the website will go into closed BETA for about a week. If you want to be part of this, now is the time to join (<--- call to action, people). Having taken part in closed BETAs of similar projects, I can attest that the enthusiasm of the people trying to answer questions during a BETA is very impressive, so I strongly recommend the experience. If you don’t make it by the time you see this post, then no worries: about a week or so after the website goes online, it will be open to the wider public.

(p.s.: thanks Romunov for pointing out to me that the BETA is about to open)

p.s: MetaOptimize

I would like to finish this post by mentioning MetaOptimize. This is a Q&A website that belongs more to the “machine learning” community than to the “statistical” one. It also started only a short while ago, and already it has around 700 users, who have submitted ~160 questions with ~520 answers. From my experience on the site so far, I have enjoyed the high quality of the questions and answers.
When I first came across the website, I feared that supporting it would split the community of R users between this website and the Area 51 StackExchange website.
But after a lengthy discussion (published recently as a post) with MetaOptimize’s founder, Joseph Turian, I came to have a more optimistic view of the competition between the two websites. Where at first I was afraid, I am now hopeful that each of the two websites will manage to draw somewhat different communities of people (who would otherwise not be present on the other website), thus offering all of us a wider variety of knowledge to tap into.

See you there…

New versions for ggplot2 (0.8.8) and plyr (1.0) were released today

As prolific as the CRAN website is in packages, several R packages succeed in standing out for their widespread use (and quality); Hadley Wickham’s ggplot2 and plyr are two such packages.
And today (through twitter) Hadley updated the rest of us with the news:

just released new versions of plyr and ggplot2. source versions available on cran, compiled will follow soon #rstats

Going to the CRAN website shows that plyr has gone through the more major update, with the previous release dating back to 2009-06-23. And now, over a year later, we are presented with plyr version 1.0, which includes new functions, new features, some bug fixes, and much-anticipated speed improvements.
ggplot2 has made a tiny leap from version 0.8.7 to 0.8.8, and was previously last updated on 2010-03-03.
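
If you want the new versions before the compiled binaries appear, here is a minimal sketch of grabbing the source packages from CRAN and checking what you got (building from source may require extra tools on Windows or Mac):

# install the freshly released source versions from CRAN
install.packages(c("plyr", "ggplot2"), type = "source")

# confirm the installed versions (should print 1.0 and 0.8.8)
packageDescription("plyr")$Version
packageDescription("ggplot2")$Version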

I (and, I am sure, many other R users) am very thankful for the amazing work that Hadley Wickham is doing (both on his code, and in helping other useRs on the help lists). So Hadley, thank you!

Here is the complete change-log list for both packages:

StackOverflow and MetaOptimize are battling to be the #1 “Statistical Analysis Q&A website” – which one would you sign up for?

A new statistical analysis Q&A website launched

While the proposal for a statistical analysis Q&A website on Area 51 (StackExchange) is taking its time, and the website is still collecting people who will commit to it, Joseph Turian, who seems like a nice guy judging from his various comments online, seems to feel this website is not what the community needs, and that we shouldn’t hold back our questions until the website goes online. Therefore, Joseph is pushing with all his might his newest creation, “MetaOptimize QA”, a StackOverflow-like website for (long list follows): machine learning, natural language processing, artificial intelligence, text analysis, information retrieval, search, data mining, statistical modeling, and data visualization – with all the bells and whistles that the OSQA framework (an open-source StackOverflow clone, and more) can offer (you know, rankings, badges, and so on).

Is this new website better than the Area 51 website? Will all the people go to just one of the two websites, or will we end up with two places that attract more people than we had to begin with? These are the questions that come to mind when faced with the story in front of us.

My own suggestion is to try both websites (the StackOverflow statistical analysis website to come, and “MetaOptimize QA”) and let time tell.

More info on this story below.

MetaOptimize online impact so far

The need for such a Q&A site is clearly evident. Just several days after being promoted online, MetaOptimize has caught the eyes of almost 300 users, who have submitted 59 questions and 129 answers.
Already, many bloggers in the statistical community have contributed their voices with encouraging posts; here is just a collection of the posts I was able to find with some googling:

But is it good to have two websites?

But wait, didn’t we just start pushing forward another statistical Q&A website two weeks ago?  I am talking about the Stack Exchange Q&A site proposal: Statistical Analysis.

So what should we (the community of statistically minded people) do the next time we have a question?

Should we wait for the Stack Exchange website to start? Or should we start using MetaOptimize?

Update: after a lengthy e-mail exchange with Joseph (the person who founded MetaOptimize), I decided to erase what I originally wrote as my doubts, and instead share a Q&A session that he and I had over e-mail. It is a bit edited from the original, and some of the content will probably get updated – so if you are into this subject, check in again in a few hours 🙂


Honestly, I am split in two (and Joseph, I do hope you’ll take this in a positive way, since personally I feel confident you are a good guy). I very strongly believe in the need for, and value of, such a Q&A website. Yet I am wondering how I feel about such a website being hosted as MetaOptimize, outside the hands of the StackOverflow guys.
On the one hand, open-source lovers (like myself) tend to like decentralization and reliance on OSS (open-source software) solutions (such as the one the OSQA framework offers). On the other hand, I do believe that the StackOverflow people have (much) more experience in handling such websites than Joseph. I can very easily trust them to do regular database backups, share the website’s database dumps with the general community, smoothly test and roll out new features, and generally act in a more experienced way with the online Q&A community.
That doesn’t mean Joseph won’t do a great job; personally, I hope he will.

Q&A session with Joseph Turian (MetaOptimize founder)

Tal: Let’s start with the easy question, should I worry about technical issues in the website (like, for example, backups)?

Joseph:

The OSQA team (backed by DZone) have got my back. They have been very helpful since day one to all OSQA users, and have given me a lot of support. Thanks, especially Rick and Hernani!

They provide email and chat support for OSQA users.

I will commit to putting up regular automatic database dumps, whenever the OSQA team implements it:
http://meta.osqa.net/questions/3120/how-do-i-offer-database-dumps
If, in six months, they don’t have this feature as part of their core, and someone (e.g. you) emails me reminding me that they want a dump, I will manually do a database dump and strip the user table.

Also, I’ve got a scheduled daily database dump that is mirrored to Amazon S3.

Tal: Why did you start MetaOptimize instead of supporting the area51 proposal?
Joseph:

  1. On Area 51, people asked to have AI merged with ML, and ML merged with statistical analysis, but their requests seemed to be ignored. This seemed like a huge disservice to these communities.
  2. Area 51 didn’t have academics in ML + NLP. I know from experience it’s hard to get them to buy in to new technology. So why would I risk my reputation getting them to sign up for Area 51, when I know that I will get a 1% conversion? They aren’t early adopters interested in the process; many are late adopters who won’t sign up for something until they have to.
  3. If the Area 51 sites had a strong newbie bent, which is what it seemed like the direction was going, then the academic experts definitely wouldn’t waste their time. It would become a support community for newbies, without core expert discussion. So basically, I know that I and a lot of my colleagues wanted the site I built. And I felt that Area 51 was shaping the communities really incorrectly in several respects, and was also taking a while. I could have fought an institutional process and maybe gotten half the results above after a few months, or I could just build the site, invite my friends, and shape the community correctly.

Besides that, there are also personal motives:

  • I wanted the recognition for having a good vision for the community, and driving forward something they really like.
  • I wanted to experiment with some NLP and ML extensions for the Q+A software, to help organize the information better. Not possible on a closed platform.

Tal: I (and maybe some other people) fear that this might fork the people in the field into two websites, instead of bringing them together. What are your thoughts about that?
Joseph:
How am I forking the community? I’m bringing a bunch of people in who wouldn’t have even been part of the Area 51 community.
Area 51 was going to fork it into five communities: stat analysis, ML, NLP, AI, and data mining. And then a lot fewer people would have been involved.

Tal: What are the things that people who support your website are saying?
Joseph:
Here are some quotes about my site:

Philip Resnick (UMD): “Looking at the questions being asked, the people responding, and the quality of the discussion, I can already see this becoming the go-to place for those ‘under the hood’ details you rarely see in the textbooks or conference papers. This site is going to save a lot of people an awful lot of time and frustration.”

Aria Haghighi (Berkeley): “Both NLP and ML have a lot of folk wisdom about what works and what doesn’t. A site like this is crucial for facilitating the sharing and validation of this collective knowledge.”

Alexandre Passos (Unicamp): “Really thank you for that. As a machine learning phd student from somewhere far from most good research centers (I’m in brazil, and how many brazillian ML papers have you seen in NIPS/ICML recently?), I struggle a lot with this folk wisdom. Most professors around here haven’t really interacted enough with the international ML community to be up to date”
(http://news.ycombinator.com/item?id=1476247)

Ryan McDonald (Google): “A tool like this will help disseminate and archive the tricks and best practices that are common in NLP/ML, but are rarely written about at length in papers.”

esoom on Reddit: “This is awesome. I’m really impressed by the quality of some of the answers, too. Within five minutes of skimming the site, I learned a neat trick that isn’t widely discussed in the literature.”
(http://www.reddit.com/r/MachineLearning/comments/ckw5k/stackoverflow_for_machine_learning_and_natural/c0tb3gc)

Tal: In order to be fair to the Area 51 effort, they have gotten wonderful responses to the “statistical analysis” proposal as well (see it here).
I have also contacted Area 51 directly and invited them to come and join the discussion. I’ll update this post with their reply.

So what’s next?

I don’t know.
If the Stack Exchange website were to launch today, I would probably focus on using it, while also pointing people to MetaOptimize (for the reasons I just mentioned, and also for some that Rob Hyndman raised when he first wrote on the subject).
If the Stack Exchange version of the website were to start in a few weeks, I would probably sit on the fence and see whether people are using it. I suspect that by then there wouldn’t be many people left to populate it (but I could always be wrong).
And what if the website were to start in a week, what then? I have no clue.

Good question.
My current feeling is that I am glad to let this play out.
It seems this is a good case study of healthy competition between platforms and models (OSQA vs. the StackOverflow/Area 51 system), one that I hope will generate more good features from both companies, and will also make both parties work hard to get people to participate.
It also seems that this situation is getting many people in our field approached with the same idea (a Q&A website). After Joseph’s input on the subject, I am starting to think that maybe, at the end of the day, this will benefit all of us. Instead of forking one community into two, maybe what we’ll end up with is more (experienced) people online (in two locations) who would otherwise have stayed in the shadows.

The jury is still out, but I am a bit more optimistic than I was when first writing this post. I’ll update this post after getting more input from people.

And as always – I would love to know your thoughts on the subject.

Visualization of regression coefficients (in R)

Update (07.07.10): The function in this post has a more mature version in the “arm” package. See the end of this post for more details.
* * * *

Imagine you want to give a presentation or report of your latest findings running some sort of regression analysis. How would you do it?

This was exactly the question Wincent Rong-gui HUANG has recently asked on the R mailing list.

One person, Bernd Weiss, responded by linking to the chapter “Plotting Regression Coefficients” in an interesting online book (which I had never heard of before) called “Using Graphs Instead of Tables” (I should add this link to the free statistics e-books list…)

Later in the conversation, Achim Zeileis surprised us (well, me) by saying the following:

I’ve thought about adding a plot() method for the coeftest() function in the “lmtest” package. Essentially, it relies on a coef() and a vcov() method being available – and that a central limit theorem holds. For releasing it as a general function in the package the code is still too raw, but maybe it’s useful for someone on the list. Hence, I’ve included it below.

(I allowed myself to add some bolds in the text)
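
To make the idea concrete, here is a minimal sketch of my own (not Achim’s actual code) of how such a plot can be built from nothing more than coef() and vcov(), using normal-approximation confidence intervals:

# a minimal coefficient plot relying only on coef() and vcov()
# (assumes a central limit theorem holds, as Achim notes)
minimal_coefplot <- function(model, level = 0.95) {
  est <- coef(model)                 # point estimates
  se  <- sqrt(diag(vcov(model)))     # standard errors
  z   <- qnorm(1 - (1 - level) / 2)  # normal quantile for the CI
  k   <- seq_along(est)
  plot(est, k, pch = 19, yaxt = "n", ylab = "", xlab = "Estimate",
       xlim = range(est - z * se, est + z * se))
  axis(2, at = k, labels = names(est), las = 1)
  segments(est - z * se, k, est + z * se, k)  # the confidence intervals
  abline(v = 0, lty = 2)                      # reference line at zero
}

Achim’s function is of course more polished than this; the point is only that coef() and vcov() are all you need.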

So for the convenience of all of us, I uploaded Achim’s code in a file for easy access. Here is an example of how to use it:

source("https://www.r-statistics.com/wp-content/uploads/2010/07/coefplot.r.txt")

data("Mroz", package = "car")
fm <- glm(lfp ~ ., data = Mroz, family = binomial)
coefplot(fm, parm = -1)

Here is the resulting graph:

I hope Achim will get around to improving the function so that he might think it worthy of joining his “lmtest” package. I am glad he shared his code, so the rest of us have something to work with in the meantime 🙂

* * *

Update (07.07.10):
Thanks to a comment by David Atkins, I found out there is a more mature version of this function (called coefplot) inside the {arm} package. This version offers many features, one of which is the ability to easily stack several confidence intervals one on top of the other.

It works for bayesglm, glm, lm, and polr objects, and a default method is available which takes pre-computed coefficients and their associated standard errors from any suitable model.

Example:
(Notice that mixing the probit model with the logit models does not make much sense, since their coefficients are on different scales, but it is enough to illustrate the use of the function)

library("arm")
data("Mroz", package = "car")
M1<-      glm(lfp ~ ., data = Mroz, family = binomial)
M2<- bayesglm(lfp ~ ., data = Mroz, family = binomial)
M3<-      glm(lfp ~ ., data = Mroz, family = binomial(probit))
coefplot(M2, xlim=c(-2, 6),            intercept=TRUE)
coefplot(M1, add=TRUE, col.pts="red",  intercept=TRUE)
coefplot(M3, add=TRUE, col.pts="blue", intercept=TRUE, offset=0.2)

(hat tip goes to Allan Engelhardt for help improving the code, and to Achim Zeileis for extending and improving the narration of the example)

Resulting plot

* * *
Lastly, another method worth mentioning is the nomogram, implemented in Frank Harrell’s rms package.
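
As a rough sketch of how that might look, using the same Mroz data as above (the chosen predictors are arbitrary, picked only for illustration):

library("rms")
data("Mroz", package = "car")
dd <- datadist(Mroz)      # rms uses datadist to know the predictors' ranges
options(datadist = "dd")
f <- lrm(lfp ~ age + lwg + inc, data = Mroz)  # logistic regression via rms
plot(nomogram(f))         # draw the nomogram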

Contest: Road Traffic Prediction for Intelligent GPS Navigation

About prize-bearing contests

Competitions with prizes are an amazing thing. If you are not sure of that, I urge you to listen to Peter Diamandis talk about his experience with the X PRIZE (start listening at minute 11:40):

In short: prizes can give up to a 1-to-50 ratio of return on the investment for the people funding the prize. The money is spent only when results are achieved. There is a lot of value in terms of public opinion and publicity. And best of all (for the promoter of the competition), prizes encourage people to take risks (at their own expense) in order to get results.

All of that said, I look at prize-bearing competitions as something worth spreading, especially in cases where the results of the winning team will be shared with the public.

About the IEEE ICDM Contest

The IEEE ICDM Contest (“Road Traffic Prediction for Intelligent GPS Navigation”) seems to be one of those cases. Due to a polite request, I am republishing here the details of this new competition, in the hope that some of my R colleagues will bring the community some pride 🙂

ggplot2 GUI progress


(Written by Ian Fellows)

Below is a link to the first of a weekly (or bi-weekly) screen-cast vlog of my progress building a GUI for the ggplot2 package.

http://neolab.stat.ucla.edu/cranstats/gsoc_vlog1.mov

comments and suggestions are more than welcome, and can be e-mailed to me at: [email protected]