Creating RDDs (Resilient Distributed Datasets)

From a data frame:

mtrdd <- createDataFrame(sqlContext, mtcars)
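
Once created, the distributed DataFrame can be inspected with the standard SparkR helpers. A minimal sketch (the columns come from mtcars; printSchema, head, and count are standard SparkR functions):

printSchema(mtrdd)  # show the column names and types Spark derived from mtcars
head(mtrdd)         # pull the first few rows back into the local R session
count(mtrdd)        # number of rows in the distributed DataFrame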

From a CSV file:

For CSV files, you need to add the spark-csv package to the environment before starting the Spark context:

# make the spark-csv package available to the SparkR shell
Sys.setenv('SPARKR_SUBMIT_ARGS' = '"--packages" "com.databricks:spark-csv_2.10:1.4.0" "sparkr-shell"')
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
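
As an alternative to setting SPARKR_SUBMIT_ARGS, later SparkR releases (Spark 1.5+, if that matches your installation) let you pass the package directly to sparkR.init via its sparkPackages argument; a sketch, assuming your version supports that parameter:

sc <- sparkR.init(sparkPackages = "com.databricks:spark-csv_2.10:1.4.0")  # assumes Spark 1.5+ sparkPackages support
sqlContext <- sparkRSQL.init(sc)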

Then you can load the CSV file, letting Spark infer the schema from the data in each column:

train <- read.df(sqlContext, "/train.csv", header = "true", source = "com.databricks.spark.csv", inferSchema = "true")
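
Schema inference costs an extra pass over the file, and guessed types are worth verifying before you rely on them. A quick check, assuming the load above succeeded:

printSchema(train)  # confirm the types spark-csv inferred for each column
head(train)         # eyeball a few rows for obvious mis-parses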

or specify the schema beforehand:

customSchema <- structType(
  structField("margin", "integer"),
  structField("gross", "integer"),
  structField("name", "string"))

train <- read.df(sqlContext, "/train.csv", header = "true", source = "com.databricks.spark.csv", schema = customSchema)
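
Supplying the schema up front skips the inference pass and protects against misdetected column types. Once loaded, the DataFrame can also be queried with SQL; a sketch using the margin, gross, and name columns declared in customSchema above:

registerTempTable(train, "train")  # expose the DataFrame as a SQL temp table
highMargin <- sql(sqlContext, "SELECT name, gross FROM train WHERE margin > 10")  # hypothetical filter for illustration
head(highMargin)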