Creating RDDs (Resilient Distributed Datasets)
From a data frame:
mtrdd <- createDataFrame(sqlContext, mtcars) # distribute a local R data frame across the cluster
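The result is a distributed DataFrame; a quick sanity check (a minimal sketch, assuming the sqlContext created in the CSV setup below is already available):
head(mtrdd)  # pull the first rows back to the driver
count(mtrdd) # row count, computed across the cluster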
From a CSV:
For CSVs, you need to add the spark-csv package to the environment before launching the Spark context:
Sys.setenv('SPARKR_SUBMIT_ARGS' = '"--packages" "com.databricks:spark-csv_2.10:1.4.0" "sparkr-shell"') # load the spark-csv package when the SparkR shell starts
sc <- sparkR.init()              # initialize the Spark context (picks up SPARKR_SUBMIT_ARGS)
sqlContext <- sparkRSQL.init(sc) # SQL context required by read.df and createDataFrame
Then you can load the CSV, letting Spark infer the schema from the data in each column:
train <- read.df(sqlContext, "/train.csv", header = "true", source = "com.databricks.spark.csv", inferSchema = "true")
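To verify what inferSchema deduced, printing the schema is a quick check (this assumes the load above succeeded):
printSchema(train) # lists each column with its inferred type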
Or specify the schema up front:
customSchema <- structType(
  structField("margin", "integer"),
  structField("gross", "integer"),
  structField("name", "string"))
train <- read.df(sqlContext, "/train.csv", header = "true", source = "com.databricks.spark.csv", schema = customSchema)
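Either way, the loaded data behaves like any SparkR DataFrame; for example, previewing a couple of columns (column names taken from the customSchema above):
head(select(train, "name", "gross")) # select columns, then fetch the first rows to the driver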