A new version of ff released (version 2.2.0)

A few hours ago, Jens Oehlschlägel announced on the R-help mailing list the release of a new version of the ff package.

The ff package provides data structures that are stored on disk but behave (almost) as if they were in RAM, by transparently mapping only a section (the pagesize) into main memory; that mapped section is the effective virtual memory consumption per ff object.
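For readers new to the package, here is a minimal sketch of what this looks like in practice (a generic example I wrote for illustration, not code from the announcement):

library(ff)
# a 100-million-element double vector: its ~800 MB live on disk,
# while only a small page of it is mapped into RAM at any time
x <- ff(vmode = "double", length = 1e8)
x[1:3] <- c(0.5, 1.5, 2.5) # reads and writes transparently page in only what is needed
x[1:3]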

Here are the new features of ff, as Jens wrote in his announcement:

—-
Dear R community,

The next release of package ff is available on CRAN. With the kind help of Brian Ripley, it now supports the Win64 and Sun versions of R. It has three major functional enhancements:

a) new fast in-memory sorting and ordering functions (single-threaded)
b) ff now supports on-disk sorting and ordering of ff vectors and ffdf dataframes
c) ff integer vectors now can be used as subscripts of ff vectors and ffdf dataframes

a) is achieved by careful implementation of NA-handling and exploiting context information
b) although permanently stored, sorting and ordering of ff objects can be faster than the standard routines in R
c) applying an order to ff vectors and ffdf dataframes is substantially slower than in pure R because it involves disk-access AND sorting index positions (to avoid random access).

There is still room for improvement; however, the current status should already be useful. I ran some comparisons with SAS (see end of mail):
– both could sort German census size (81e6 rows) on a 3GB notebook
– ff sorts and orders faster on single columns
– sorting big multicolumn-tables is faster in SAS
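To make feature (c) a bit more tangible, here is a minimal sketch of subscripting one ff object with another (my own toy example based on the announcement, not Jens' code):

library(ff)
x <- ff(rnorm(100))      # a double ff vector, stored on disk
i <- ff(c(2L, 50L, 99L)) # an ff integer vector, to be used as a subscript
x[i]                     # feature (c): index an ff vector by an ff integer vector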

Continue reading “A new version of ff released (version 2.2.0)”

R syntax highlighting for bloggers on WordPress.com

Announcing the ability to highlight R syntax in WordPress.com blogs, thanks to the recent work of Yihui Xie, Yoav Farhi and Andrew Redd.

Good news for R bloggers who are using WordPress.com to host their blog.

This week the good people running WordPress.com (special thanks go to Yoav Farhi) added the ability for all users of the WordPress.com platform to highlight their R code inside posts.

Basically you’ll need to wrap the code in your post like this:

[sourcecode language="r"]
test.function <- function(r) {
    return(pi * r^2)
}
test.function(1)
[/sourcecode]

(Which will then look like this: [screenshot of the R syntax-highlighted code example].)

Further details (and other supported languages) can be read about on this WordPress.com support page.

This new feature was made possible thanks to the work of Yihui Xie (creator of the famous animation package for R), who wrote an R syntax brush for the SyntaxHighlighter WordPress plugin (the plugin used by WordPress.com for syntax highlighting). Thanks should also go to Andrew Redd, the creator of NppToR (which connects Notepad++ to R). He both made some good suggestions and was game to take on the brush creation in case there were problems (thankfully, so far there aren't any).

p.s: If you are a WordPress.org user (i.e., you have a self-hosted WordPress blog) and want to enable R syntax highlighting on your blog, I would recommend the WP-Syntax plugin (enhanced with GeSHi version 1.0.8.6), which can be downloaded here.

Open source and money – why paying R developers might not always help the project

This post can be summed up by a few sentences: "We can't buy love." "Starting to pay for love could make it disappear." And at the same time: "We need money to live and love." These conflicting forces, in relation to open source, are the topic of this post.

This post is directed at the community of R users, but is relevant to people of all open source projects. It deals with the question of open source projects and funding. Specifically: should a community of open source developers and users, once it exists, start raising/donating money to the main code contributors?

The conflict arises because, on the one hand, we intuitively wish to repay the people who have helped us, yet we worry about the implications of behavioral studies suggesting that doing so might destroy the developers' motivation to continue working without constantly getting paid, and that the shift from doing something for one reason (whatever it is) to doing it for money might not easily be turned back.
On the other hand, developers need to make a (good) living, and we (as a community) should strive for them to be well paid.
How can these two be reconciled?

This article won't offer a decisive conclusion. My hope is to invite discussion on the matter (from both amateurs and professionals in the fields of open source and behavioral economics), so as to give people more ideas on which to base their opinions.

Update: this post was substantially updated from its original version, thanks to responses both in the comments and especially by e-mail. I apologize for writing a post that needed so many corrections, and at the same time I am grateful to all the people who took the time to shed light on places where I was wrong.

* * * *

Motivation: R has issues – how do we get them fixed?

In the past two weeks there has been a raging debate regarding the future of R (hint: "what is R"). Without going deeper into the topic (I already wrote about it here, where you too can go and respond), I'll sum up the issue with a quote from Ross Ihaka (one of the two founders of R), who recently wrote:

I’ve been worried for some time that R isn’t going to provide the base that we’re going to need for statistical computation in the future. (It may well be that the future is already upon us.) There are certainly efficiency problems (speed and memory use), but there are more fundamental issues too. Some of these were inherited from S and some are peculiar to R.

After this, several discussion threads were started around the web (for example: 0, 1, 2, 3, 4, 5, 6), but then a comment was made on the R-help mailing list by Jaroslaw Piskorski, who wrote:

A few days ago Tal Galili posted a message about some controversies concerning the future of R. Having read the discussions, especially those following Ross Ihaka's post, I have come to the conclusion that, as usual, the problem is money. I doubt there would be discussions about dropping R in its present form if the R-Foundation were properly funded and could hire computer scientists, programmers and statisticians. If a commercial company is able to provide big-database and multicore solutions, then so would a properly funded R-Foundation.

To which my response is: I strongly disagree with this statement.
That is, I do agree that money could help with things, and it may well be part of the solution. But I doubt that money is the core of this problem, or that the problem would be solved if only we could now hire "computer scientists, programmers and statisticians".

And the reason I am doubtful stems from two sources:

Continue reading “Open source and money – why paying R developers might not always help the project”

Dumping functions from the global environment into an R script file

Looking at a project you haven't touched for years poses many challenges. The less documentation and organization you had in your files, the more time you'll have to spend tracing back what you did when the code was written.

I just opened up such a project, from before I ever knew to split my .r files into "data.r", "functions.r" and "do.r". All I have are several versions of an old .RData file and many .r files with a mix of functions and commands (oh the shame!).

One idea I had for tracing back was to take the latest version of the .RData file I had and see what functions were defined in its environment. Simply typing ls() wouldn't work, since I wanted to extract only the functions that were defined in my .RData environment (and write them to a file). Thanks to the code recently published by Richie Cotton, I was able to create the "save.functions.from.env" function. It goes through all your defined functions and writes them into one file (by default "d:/temp.r").

I hope this might be useful to some of you in the future; here is the code to do it:

save.functions.from.env <- function(file = "d:/temp.r")
{
	# This function will go through all the functions defined in the global
	# environment and write them into a file (by default "d:/temp.r")
	# let's get all the functions from the global environment:
	funs <- Filter(is.function, sapply(ls(".GlobalEnv"), get, envir = globalenv()))

	# let's write each function into the file, one at a time:
	for(i in seq_along(funs))
	{
		cat(	# number the function we are about to add
			paste("\n", "#------ Function number ", i, "-----------------------------------", "\n"),
			append = TRUE, file = file
			)

		cat(	# print the function into the file
			paste(names(funs)[i], "<-", paste(capture.output(funs[[i]]), collapse = "\n")),
			append = TRUE, file = file
			)

		cat(
			paste("\n", "#-----------------------------------------", "\n"),
			append = TRUE, file = file
			)
	}

	cat( # write at the end of the file how many new functions were added to it
		paste("# A total of", length(funs), "functions were written into", file),
		append = TRUE, file = file
		)
	print(paste("A total of", length(funs), "functions were written into", file))
}

# save.functions.from.env() # this is how you run it

Update: Joshua Ulrich offered another solution for this challenge on StackOverflow:

	newEnv <- new.env()
	load("myFunctions.Rdata", newEnv)
	dump(c(lsf.str(newEnv)), file="normalCodeFile.R", envir=newEnv)

He also suggested looking into ?prompt (which creates documentation files for objects) and/or ?package.skeleton.
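For example, package.skeleton() can take the same loaded environment and write out one .R source file per object (plus .Rd documentation stubs). A minimal sketch, assuming the functions were loaded into newEnv as in the snippet above (the package name "myOldProject" is just a placeholder):

# creates ./myOldProject/R/*.R (the function sources) and ./myOldProject/man/*.Rd stubs:
package.skeleton(name = "myOldProject", environment = newEnv)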

Using the {plyr} (1.2) package parallel processing backend on Windows

Hadley Wickham has just announced the release of a new R package, "reshape2", which is (as Hadley wrote) "a reboot of the reshape package". Alongside it, Hadley announced the release of plyr 1.2.1 (now faster and with support for parallel computation!).
Both releases are exciting due to the significant speed increase they have gained.

Yet in the case of the new plyr package, an even more interesting addition is the introduction of a parallel processing backend.

A reminder of what the `plyr` package is all about

(as written in Hadley's announcement)

plyr is a set of tools for a common set of problems: you need to __split__ up a big data structure into homogeneous pieces, __apply__ a function to each piece and then __combine__ all the results back together. For example, you might want to:

• fit the same model to each patient subset of a data frame
• quickly calculate summary statistics for each group
• perform group-wise transformations like scaling or standardising

It's already possible to do this with base R functions (like split and the apply family of functions), but plyr makes it all a bit easier with:

• totally consistent names, arguments and outputs
• convenient parallelisation through the foreach package
• input from and output to data.frames, matrices and lists
• progress bars to keep track of long running operations
• built-in error recovery, and informative error messages
• labels that are maintained across all transformations

Considerable effort has been put into making plyr fast and memory efficient, and in many cases plyr is as fast as, or faster than, the built-in functions.

You can find out more at http://had.co.nz/plyr/, including a 20 page introductory guide, http://had.co.nz/plyr/plyr-intro.pdf. You can ask questions about plyr (and data-manipulation in general) on the plyr mailing list. Sign up at http://groups.google.com/group/manipulatr
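To make the split-apply-combine idea concrete, here is a one-liner (my own toy example, not from the announcement) using the built-in iris data:

library(plyr)
# split iris by Species, apply mean() to Petal.Length in each piece, combine the results:
ddply(iris, .(Species), summarise, mean.petal = mean(Petal.Length))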

What's new in `plyr` (1.2.1)

The exciting news about the release of the new plyr version is the added support for parallel processing:

l*ply, d*ply, a*ply and m*ply all gain a .parallel argument that, when TRUE, applies functions in parallel using a parallel backend registered with the foreach package.

The new package also has some minor changes and bug fixes, all of which can be read about here.

In the original announcement by Hadley, he gave an example of using the new parallel backend with the doMC package for unix/linux. For Windows (the OS I'm using) you should use the doSMP package (as David mentioned in his post earlier today). However, this package is currently only released for "REvolution R" and not yet for R 2.11 (see more about it here). But thanks to the kind help of Tao Shi, there is a solution for Windows users who want a parallel processing backend for plyr on Windows.

All you need is to install the doSMP package according to the instructions in the post "Parallel Multicore Processing with R (on Windows)", and then use it like this:


require(plyr) # make sure you have 1.2 or later installed
x <- seq_len(20)
wait <- function(i) Sys.sleep(0.1)
system.time(llply(x, wait))
#    user  system elapsed
#       0       0       2
require(doSMP)
workers <- startWorkers(2) # My computer has 2 cores
registerDoSMP(workers)
system.time(llply(x, wait, .parallel = TRUE))
#    user  system elapsed
#    0.09    0.00    1.11

Update (03.09.2012): the above code will no longer work with updated versions of R (R 2.15 etc.).

Trying to run it will result in the error message:

Loading required package: doSMP
Warning message:
In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE,  :
  there is no package called 'doSMP'

Because trying to install the package will give the error message:

> install.packages("doSMP")
Installing package(s) into 'D:/R/library'
(as 'lib' is unspecified)
Warning message:
package 'doSMP' is not available (for R version 2.15.0)

You can fix this by replacing the use of the {doSMP} package with the {doParallel}+{foreach} packages. Here is how:

if(!require(foreach)) install.packages("foreach")
if(!require(doParallel)) install.packages("doParallel")
# require(doSMP) # will no longer work...
library(foreach)
library(doParallel)
workers <- makeCluster(2) # My computer has 2 cores
registerDoParallel(workers)

x <- seq_len(20)
wait <- function(i) Sys.sleep(0.3)
system.time(llply(x, wait)) # 6 sec
system.time(llply(x, wait, .parallel = TRUE)) # 3.53 sec
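One small addition worth making (my suggestion, not part of the original snippet): once you are done, it is good practice to release the worker processes:

stopCluster(workers) # shut down the 2 worker R sessions created by makeCluster()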
    

Tips for the R beginner (a 5 page overview)

In this post I publish a PDF document titled "A collection of tips for R in Finance".
It is a basic 5-page introduction to R in finance by Arnaud Amsellem (LinkedIn profile).

The article offers tips related to the following points (see the small example after the list):

• Code Editor
• Organizing R code
• Update packages
• Getting external data into R
• Communicating with external applications
• Optimizing R code
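As a tiny illustration of the "Update packages" item, this usually boils down to a single line (a generic example of mine, not taken from the PDF):

update.packages(ask = FALSE) # update all outdated packages without prompting for each one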

This article is well articulated, offers the perspective of someone experienced in the field, and touches on points that I can imagine beginners might otherwise overlook. I hope publishing it here will be of use to some readers out there.

Update: as some readers have noted to me (by e-mail and in the comments), this document touches very lightly on the topic of "finance" in R. I therefore decided to update the title from "R in finance – some tips for beginners" to its current form.

Lastly: if you (a reader of this blog) feel you have an article ("post") to contribute, but don't feel like starting your own blog, feel welcome to contact me, and I'll be glad to post what you have to say on my blog (and subsequently, also on R-bloggers).

Here is the article:
Continue reading "Tips for the R beginner (a 5 page overview)"

Rose plot using Deducer's ggplot2 plot builder

The (excellent!) LearnR blog had a post today about making a rose plot in ggplot2.
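For readers who prefer plain code over the GUI, here is a minimal hand-rolled sketch of a rose plot in ggplot2 (my own toy data; this is not Ian's template):

library(ggplot2)
# 200 random compass directions (toy data):
dirs <- c("N", "NE", "E", "SE", "S", "SW", "W", "NW")
d <- data.frame(direction = factor(sample(dirs, 200, replace = TRUE), levels = dirs))
ggplot(d, aes(x = direction)) +
  geom_bar(width = 1, colour = "black", fill = "skyblue") +
  coord_polar() # wrap the bar chart around a circle to get a rose plot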

Following today's announcement by Ian Fellows regarding the release of the new version of Deducer (0.4), which offers strong support for ggplot2 through a GUI plot builder, Ian also sent an e-mail showing how to create a rose plot using the new ggplot2 GUI included in the latest version of Deducer. Once the template is made, the plot can be generated with 4 clicks of the mouse.

Here is a video tutorial (published by Ian) showing how this can be used:

The generated template file is available at:
http://neolab.stat.ucla.edu/cranstats/rose.ggtmpl

I am excited about the work Ian is doing, and hope to see more people publish use cases with Deducer.

ggplot2 plot builder is now on CRAN! (through Deducer 0.4 GUI for R)

Ian Fellows, a hard-working contributor to the R community (and a cool guy), has announced today the release of Deducer (0.4) to CRAN (scheduled to update in the next day or so).
This major update also includes the release of a new plug-in package (DeducerExtras), containing additional dialogs and functionality.

Following is the e-mail he sent out with all the details and demo videos.

Continue reading "ggplot2 plot builder is now on CRAN! (through Deducer 0.4 GUI for R)"

Richard Stallman talk+Q&A at the useR! 2010 conference (audio files attached)

The audio files of the full talk by Richard Stallman are attached to the end of this post.

—————–

Videos of all the invited talks of the useR! 2010 conference can be viewed on the R User Group blog

—————–

Last week I had the honor of attending the talk given by Richard Stallman, the last keynote speaker at the useR! 2010 conference. In this post I will give some brief context for the talk, and then share the audio files, with some description of what was said.

Context for the talk

Richard Stallman can be viewed as (one of) the fathers of free software (free as in speech, not as in beer).

He is the man who led the GNU project for the creation of a free (as in speech, not as in beer) operating system, on the basis of which GNU/Linux, with its numerous distributions, was created.
Richard also developed a number of pieces of widely used software, including the original Emacs, the GNU Compiler Collection, the GNU Debugger, and many tools in the GNU Coreutils.

Richard also initiated the free software movement; in October 1985 he founded its formal foundation (the Free Software Foundation), and in 1989 he co-founded the League for Programming Freedom.

Stallman pioneered the concept of "copyleft", and he is the main author of several copyleft licenses, including the GNU General Public License, the most widely used free software license.

You can read about him in the Wikipedia article titled "Richard Stallman".

The useR! 2010 conference is an annual 4-day conference of the community of people using R. R is free open-source software for data analysis and statistical computing (here is a bit more about what R is).

The conference this year was truly a wonderful experience for me. I had the pleasure of giving two talks (about which I will blog later this month), listened to numerous talks on the use of R, and had a chance to meet many (many) kind and interesting people.

Richard Stallman's talk

The talk took place on July 23rd, 2010 at NIST (U.S.) and was the concluding talk of the useR! 2010 conference. It consisted of a two-hour lecture followed by a half-hour question-and-answer session.

On a personal note, I was very impressed by Richard's talk. Richard is not a shy computer geek, but rather a serious leader and thinker trying to stir people to action. His speech was a sermon on free software, the history of GNU/Linux, the various versions of the GPL, and his own history involving them.

I believe this talk would be of interest to anyone who cares about social solidarity, free software, programming, and the hope of a better world for all of us.

I am eager for your thoughts in the comments (but please keep a kind tone).

Here is Richard Stallman's (2 hour) talk:

Continue reading "Richard Stallman talk+Q&A at the useR! 2010 conference (audio files attached)"

Want to join the closed BETA of a new Statistical Analysis Q&A site – NOW is the time!

The bottom line of this post is for you to go to:
Stack Exchange Q&A site proposal: Statistical Analysis
And commit yourself to using the website for asking and answering questions.

(And also consider giving the contender, MetaOptimize, a visit)

* * * *

The Statistical Analysis Q&A website is about to go into BETA

A month ago I invited readers of this blog to commit to using a new Q&A website for data analysis (based on the StackOverflow engine) once it opened (the site was originally proposed by Rob Hyndman).
And now, a month later, I am happy to write that over 500 people have shown interest in the website and chosen to commit themselves. This means we have reached 100% completion of the website proposal process, and in the next few days we will move to the next step.

The next step is that the website will go into closed BETA for about a week. If you want to be part of this – now is the time to join (a call to action, people!). From being part of other closed BETAs of similar projects, I can attest that the enthusiasm of the people trying to answer questions in the BETA is very impressive, so I strongly recommend the experience.

If you don't make it by the time you see this post, then no worries – about a week or so after the website goes online, it will be open to the wide public.

(p.s: thanks Romunov for pointing out to me that the BETA is about to open)

p.s: MetaOptimize

I would like to finish this post by mentioning MetaOptimize. This is a Q&A website with more of a "machine learning" than a "statistical" community. It also started a short while ago, and already it has around 700 users who have submitted ~160 questions with ~520 answers given. From my experience on the site so far, I have enjoyed the high quality of the questions and answers.
When I first came across the website, I feared that supporting it would split the R community of users between this website and the Area 51 StackExchange website.
But after a lengthy discussion (published recently as a post) with MetaOptimize founder Joseph Turian, I came to have a more optimistic view of the competition between the two websites. Where at first I was afraid, I am now hopeful that each of the two websites will manage to draw slightly different communities of people (who otherwise wouldn't be present on the other website) – thus offering all of us a wider variety of knowledge to tap into.

See you there…