import json

from pyspark.sql.types import *

# Define the schema
schema = StructType(
    [StructField("name", StringType(), True), StructField("age", IntegerType(), True)]
)

# Write the schema
with open("schema.json", "w") as f:
    json.dump(schema.jsonValue(), f)

# Read the schema
with open("schema.json") as f:
    new_schema = StructType.fromJson(json.load(f))

print(new_schema.simpleString())
Thanks for the code samples dude.
Hi, I had an issue reproducing this code: new_schema = StructType.fromJson(json.load(f))
worked as: new_schema = StructType.fromJson(json.loads(f))
json.load(fp) is for deserializing a text or binary file fp. json.loads(s) is for deserializing a string s.
How can I store the schema in JSON format in a file in cloud storage, say an Azure Storage blob?
json.dumps(schema.jsonValue()) returns a string that contains the JSON representation of the schema. You can then use the Azure BlobClient to upload that string, as described in this guide from the Microsoft docs.
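A rough sketch of that flow follows. The serialization part runs as-is; the upload part is left as comments because it needs the azure-storage-blob package plus real credentials, and the connection string, container, and blob names are placeholders:

```python
import json

# Build the JSON string for the schema (this is schema.jsonValue() in the gist
# above; a plain dict stands in here so the sketch runs without PySpark).
schema_json = json.dumps({"type": "struct", "fields": [
    {"name": "name", "type": "string", "nullable": True, "metadata": {}},
]})

# Upload sketch (assumes azure-storage-blob is installed; fill in your own values):
#
# from azure.storage.blob import BlobClient
# blob = BlobClient.from_connection_string(
#     conn_str="<connection-string>",
#     container_name="<container>",
#     blob_name="schema.json",
# )
# blob.upload_blob(schema_json, overwrite=True)
```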
The schema of an existing DataFrame df can be written with: