-rw-r--r--   doc/csv/arguments/io.rdoc                          5
-rw-r--r--   doc/csv/options/common/col_sep.rdoc               57
-rw-r--r--   doc/csv/options/common/quote_char.rdoc            42
-rw-r--r--   doc/csv/options/common/row_sep.rdoc               91
-rw-r--r--   doc/csv/options/generating/force_quotes.rdoc      17
-rw-r--r--   doc/csv/options/generating/quote_empty.rdoc       12
-rw-r--r--   doc/csv/options/generating/write_converters.rdoc  25
-rw-r--r--   doc/csv/options/generating/write_empty_value.rdoc 15
-rw-r--r--   doc/csv/options/generating/write_headers.rdoc     29
-rw-r--r--   doc/csv/options/generating/write_nil_value.rdoc   14
-rw-r--r--   doc/csv/options/parsing/converters.rdoc           46
-rw-r--r--   doc/csv/options/parsing/empty_value.rdoc          13
-rw-r--r--   doc/csv/options/parsing/field_size_limit.rdoc     39
-rw-r--r--   doc/csv/options/parsing/header_converters.rdoc    43
-rw-r--r--   doc/csv/options/parsing/headers.rdoc              63
-rw-r--r--   doc/csv/options/parsing/liberal_parsing.rdoc      38
-rw-r--r--   doc/csv/options/parsing/nil_value.rdoc            12
-rw-r--r--   doc/csv/options/parsing/return_headers.rdoc       22
-rw-r--r--   doc/csv/options/parsing/skip_blanks.rdoc          31
-rw-r--r--   doc/csv/options/parsing/skip_lines.rdoc           37
-rw-r--r--   doc/csv/options/parsing/strip.rdoc                15
-rw-r--r--   doc/csv/options/parsing/unconverted_fields.rdoc   27
-rw-r--r--   doc/csv/recipes/filtering.rdoc                   158
-rw-r--r--   doc/csv/recipes/generating.rdoc                  246
-rw-r--r--   doc/csv/recipes/parsing.rdoc                     545
-rw-r--r--   doc/csv/recipes/recipes.rdoc                       6
26 files changed, 0 insertions, 1648 deletions
diff --git a/doc/csv/arguments/io.rdoc b/doc/csv/arguments/io.rdoc
deleted file mode 100644
index f5fe1d1975..0000000000
--- a/doc/csv/arguments/io.rdoc
+++ /dev/null
@@ -1,5 +0,0 @@
-* Argument +io+ should be an IO object that is:
- * Open for reading; on return, the IO object will be closed.
- * Positioned at the beginning.
- To position at the end, for appending, use method CSV.generate.
- For any other positioning, pass a preset \StringIO object instead.
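-
-A minimal sketch of presetting a \StringIO before passing it to a parsing method
-(the sample data and the skipped leading line here are illustrative only):
-  require 'csv'
-  require 'stringio'
-  io = StringIO.new("# a leading line to be skipped\nName,Value\nfoo,0\n")
-  io.gets # Read past the first line, leaving io positioned at the CSV data.
-  CSV.parse(io) # => [["Name", "Value"], ["foo", "0"]]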
diff --git a/doc/csv/options/common/col_sep.rdoc b/doc/csv/options/common/col_sep.rdoc
deleted file mode 100644
index 3f23c6d2d3..0000000000
--- a/doc/csv/options/common/col_sep.rdoc
+++ /dev/null
@@ -1,57 +0,0 @@
-====== Option +col_sep+
-
-Specifies the \String field separator to be used
-for both parsing and generating.
-The \String will be transcoded into the data's \Encoding before use.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:col_sep) # => "," (comma)
-
-Using the default (comma):
- str = CSV.generate do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo,0\nbar,1\nbaz,2\n"
- ary = CSV.parse(str)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Using +:+ (colon):
- col_sep = ':'
- str = CSV.generate(col_sep: col_sep) do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo:0\nbar:1\nbaz:2\n"
- ary = CSV.parse(str, col_sep: col_sep)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Using +::+ (two colons):
- col_sep = '::'
- str = CSV.generate(col_sep: col_sep) do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo::0\nbar::1\nbaz::2\n"
- ary = CSV.parse(str, col_sep: col_sep)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Using <tt>''</tt> (empty string):
- col_sep = ''
- str = CSV.generate(col_sep: col_sep) do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo0\nbar1\nbaz2\n"
-
----
-
-Raises an exception if parsing with the empty \String:
- col_sep = ''
- # Raises ArgumentError (:col_sep must be 1 or more characters: "")
- CSV.parse("foo0\nbar1\nbaz2\n", col_sep: col_sep)
-
diff --git a/doc/csv/options/common/quote_char.rdoc b/doc/csv/options/common/quote_char.rdoc
deleted file mode 100644
index 67fd3af68b..0000000000
--- a/doc/csv/options/common/quote_char.rdoc
+++ /dev/null
@@ -1,42 +0,0 @@
-====== Option +quote_char+
-
-Specifies the character (\String of length 1) used to quote fields
-in both parsing and generating.
-This String will be transcoded into the data's \Encoding before use.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:quote_char) # => "\"" (double quote)
-
-This is useful for an application that incorrectly uses <tt>'</tt> (single-quote)
-to quote fields, instead of the correct <tt>"</tt> (double-quote).
-
-Using the default (double quote):
- str = CSV.generate do |csv|
- csv << ['foo', 0]
- csv << ["'bar'", 1]
- csv << ['"baz"', 2]
- end
- str # => "foo,0\n'bar',1\n\"\"\"baz\"\"\",2\n"
- ary = CSV.parse(str)
- ary # => [["foo", "0"], ["'bar'", "1"], ["\"baz\"", "2"]]
-
-Using <tt>'</tt> (single-quote):
- quote_char = "'"
- str = CSV.generate(quote_char: quote_char) do |csv|
- csv << ['foo', 0]
- csv << ["'bar'", 1]
- csv << ['"baz"', 2]
- end
- str # => "foo,0\n'''bar''',1\n\"baz\",2\n"
- ary = CSV.parse(str, quote_char: quote_char)
- ary # => [["foo", "0"], ["'bar'", "1"], ["\"baz\"", "2"]]
-
----
-
-Raises an exception if the \String length is greater than 1:
- # Raises ArgumentError (:quote_char has to be nil or a single character String)
- CSV.new('', quote_char: 'xx')
-
-Raises an exception if the value is not a \String:
- # Raises ArgumentError (:quote_char has to be nil or a single character String)
- CSV.new('', quote_char: :foo)
diff --git a/doc/csv/options/common/row_sep.rdoc b/doc/csv/options/common/row_sep.rdoc
deleted file mode 100644
index eae15b4a84..0000000000
--- a/doc/csv/options/common/row_sep.rdoc
+++ /dev/null
@@ -1,91 +0,0 @@
-====== Option +row_sep+
-
-Specifies the row separator, a \String or the \Symbol <tt>:auto</tt> (see below),
-to be used for both parsing and generating.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:row_sep) # => :auto
-
----
-
-When +row_sep+ is a \String, that \String becomes the row separator.
-The String will be transcoded into the data's Encoding before use.
-
-Using <tt>"\n"</tt>:
- row_sep = "\n"
- str = CSV.generate(row_sep: row_sep) do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo,0\nbar,1\nbaz,2\n"
- ary = CSV.parse(str)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Using <tt>|</tt> (pipe):
- row_sep = '|'
- str = CSV.generate(row_sep: row_sep) do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo,0|bar,1|baz,2|"
- ary = CSV.parse(str, row_sep: row_sep)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Using <tt>--</tt> (two hyphens):
- row_sep = '--'
- str = CSV.generate(row_sep: row_sep) do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo,0--bar,1--baz,2--"
- ary = CSV.parse(str, row_sep: row_sep)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Using <tt>''</tt> (empty string):
- row_sep = ''
- str = CSV.generate(row_sep: row_sep) do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo,0bar,1baz,2"
- ary = CSV.parse(str, row_sep: row_sep)
- ary # => [["foo", "0bar", "1baz", "2"]]
-
----
-
-When +row_sep+ is the \Symbol +:auto+ (the default),
-generating uses <tt>"\n"</tt> as the row separator:
- str = CSV.generate do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo,0\nbar,1\nbaz,2\n"
-
-Parsing, on the other hand, invokes auto-discovery of the row separator.
-
-Auto-discovery reads ahead in the data looking for the next <tt>\r\n</tt>, +\n+, or +\r+ sequence.
-The sequence will be selected even if it occurs in a quoted field,
-assuming that you would have the same line endings there.
-
-Example:
- str = CSV.generate do |csv|
- csv << [:foo, 0]
- csv << [:bar, 1]
- csv << [:baz, 2]
- end
- str # => "foo,0\nbar,1\nbaz,2\n"
- ary = CSV.parse(str)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-The default <tt>$INPUT_RECORD_SEPARATOR</tt> (<tt>$/</tt>) is used
-if any of the following is true:
-* None of those sequences is found.
-* Data is +ARGF+, +STDIN+, +STDOUT+, or +STDERR+.
-* The stream is only available for output.
-
-Obviously, discovery takes a little time. Set the separator manually if speed is important.
-Also note that on Windows, IO objects should be opened in binary mode if this feature will be used,
-because the line-ending translation can cause problems with resetting the document position
-to where it was before the read ahead.
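-
-A minimal sketch of bypassing auto-discovery by setting the separator explicitly
-(the file name is illustrative; binary mode is used as suggested above for Windows):
-  File.open('t.csv', 'rb') do |io|
-    CSV.parse(io, row_sep: "\r\n")
-  end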
diff --git a/doc/csv/options/generating/force_quotes.rdoc b/doc/csv/options/generating/force_quotes.rdoc
deleted file mode 100644
index 11afd1a16c..0000000000
--- a/doc/csv/options/generating/force_quotes.rdoc
+++ /dev/null
@@ -1,17 +0,0 @@
-====== Option +force_quotes+
-
-Specifies the boolean that determines whether each output field is to be double-quoted.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:force_quotes) # => false
-
-For examples in this section:
- ary = ['foo', 0, nil]
-
-Using the default, +false+:
- str = CSV.generate_line(ary)
- str # => "foo,0,\n"
-
-Using +true+:
- str = CSV.generate_line(ary, force_quotes: true)
- str # => "\"foo\",\"0\",\"\"\n"
diff --git a/doc/csv/options/generating/quote_empty.rdoc b/doc/csv/options/generating/quote_empty.rdoc
deleted file mode 100644
index 4c5645c662..0000000000
--- a/doc/csv/options/generating/quote_empty.rdoc
+++ /dev/null
@@ -1,12 +0,0 @@
-====== Option +quote_empty+
-
-Specifies the boolean that determines whether an empty value is to be double-quoted.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:quote_empty) # => true
-
-With the default +true+:
- CSV.generate_line(['"', ""]) # => "\"\"\"\",\"\"\n"
-
-With +false+:
- CSV.generate_line(['"', ""], quote_empty: false) # => "\"\"\"\",\n"
diff --git a/doc/csv/options/generating/write_converters.rdoc b/doc/csv/options/generating/write_converters.rdoc
deleted file mode 100644
index d1a9cc748f..0000000000
--- a/doc/csv/options/generating/write_converters.rdoc
+++ /dev/null
@@ -1,25 +0,0 @@
-====== Option +write_converters+
-
-Specifies converters to be used in generating fields.
-See {Write Converters}[#class-CSV-label-Write+Converters]
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:write_converters) # => nil
-
-With no write converter:
- str = CSV.generate_line(["\na\n", "\tb\t", " c "])
- str # => "\"\na\n\",\tb\t, c \n"
-
-With a write converter:
- strip_converter = proc {|field| field.strip }
- str = CSV.generate_line(["\na\n", "\tb\t", " c "], write_converters: strip_converter)
- str # => "a,b,c\n"
-
-With two write converters (called in order):
- upcase_converter = proc {|field| field.upcase }
- downcase_converter = proc {|field| field.downcase }
- write_converters = [upcase_converter, downcase_converter]
- str = CSV.generate_line(['a', 'b', 'c'], write_converters: write_converters)
- str # => "a,b,c\n"
-
-See also {Write Converters}[#class-CSV-label-Write+Converters]
diff --git a/doc/csv/options/generating/write_empty_value.rdoc b/doc/csv/options/generating/write_empty_value.rdoc
deleted file mode 100644
index 67be5662cb..0000000000
--- a/doc/csv/options/generating/write_empty_value.rdoc
+++ /dev/null
@@ -1,15 +0,0 @@
-====== Option +write_empty_value+
-
-Specifies the object that is to be substituted for each field
-that has an empty \String.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:write_empty_value) # => ""
-
-Without the option:
- str = CSV.generate_line(['a', '', 'c', ''])
- str # => "a,\"\",c,\"\"\n"
-
-With the option:
- str = CSV.generate_line(['a', '', 'c', ''], write_empty_value: "x")
- str # => "a,x,c,x\n"
diff --git a/doc/csv/options/generating/write_headers.rdoc b/doc/csv/options/generating/write_headers.rdoc
deleted file mode 100644
index c56aa48adb..0000000000
--- a/doc/csv/options/generating/write_headers.rdoc
+++ /dev/null
@@ -1,29 +0,0 @@
-====== Option +write_headers+
-
-Specifies the boolean that determines whether a header row is included in the output;
-ignored if there are no headers.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:write_headers) # => nil
-
-Without +write_headers+:
- file_path = 't.csv'
- CSV.open(file_path,'w',
- :headers => ['Name','Value']
- ) do |csv|
- csv << ['foo', '0']
- end
- CSV.open(file_path) do |csv|
- csv.shift
- end # => ["foo", "0"]
-
-With +write_headers+:
- CSV.open(file_path,'w',
- :write_headers => true,
- :headers => ['Name','Value']
- ) do |csv|
- csv << ['foo', '0']
- end
- CSV.open(file_path) do |csv|
- csv.shift
- end # => ["Name", "Value"]
diff --git a/doc/csv/options/generating/write_nil_value.rdoc b/doc/csv/options/generating/write_nil_value.rdoc
deleted file mode 100644
index 65d33ff54e..0000000000
--- a/doc/csv/options/generating/write_nil_value.rdoc
+++ /dev/null
@@ -1,14 +0,0 @@
-====== Option +write_nil_value+
-
-Specifies the object that is to be substituted for each +nil+-valued field.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:write_nil_value) # => nil
-
-Without the option:
- str = CSV.generate_line(['a', nil, 'c', nil])
- str # => "a,,c,\n"
-
-With the option:
- str = CSV.generate_line(['a', nil, 'c', nil], write_nil_value: "x")
- str # => "a,x,c,x\n"
diff --git a/doc/csv/options/parsing/converters.rdoc b/doc/csv/options/parsing/converters.rdoc
deleted file mode 100644
index 211fa48de6..0000000000
--- a/doc/csv/options/parsing/converters.rdoc
+++ /dev/null
@@ -1,46 +0,0 @@
-====== Option +converters+
-
-Specifies converters to be used in parsing fields.
-See {Field Converters}[#class-CSV-label-Field+Converters]
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:converters) # => nil
-
-The value may be a field converter name
-(see {Stored Converters}[#class-CSV-label-Stored+Converters]):
- str = '1,2,3'
- # Without a converter
- array = CSV.parse_line(str)
- array # => ["1", "2", "3"]
- # With built-in converter :integer
- array = CSV.parse_line(str, converters: :integer)
- array # => [1, 2, 3]
-
-The value may be a converter list
-(see {Converter Lists}[#class-CSV-label-Converter+Lists]):
- str = '1,3.14159'
- # Without converters
- array = CSV.parse_line(str)
- array # => ["1", "3.14159"]
- # With built-in converters
- array = CSV.parse_line(str, converters: [:integer, :float])
- array # => [1, 3.14159]
-
-The value may be a \Proc custom converter
-(see {Custom Field Converters}[#class-CSV-label-Custom+Field+Converters]):
- str = ' foo , bar , baz '
- # Without a converter
- array = CSV.parse_line(str)
- array # => [" foo ", " bar ", " baz "]
- # With a custom converter
- array = CSV.parse_line(str, converters: proc {|field| field.strip })
- array # => ["foo", "bar", "baz"]
-
-See also {Custom Field Converters}[#class-CSV-label-Custom+Field+Converters]
-
----
-
-Raises an exception if the converter is not a converter name or a \Proc:
- str = 'foo,0'
- # Raises NoMethodError (undefined method `arity' for nil:NilClass)
- CSV.parse(str, converters: :foo)
diff --git a/doc/csv/options/parsing/empty_value.rdoc b/doc/csv/options/parsing/empty_value.rdoc
deleted file mode 100644
index 7d3bcc078c..0000000000
--- a/doc/csv/options/parsing/empty_value.rdoc
+++ /dev/null
@@ -1,13 +0,0 @@
-====== Option +empty_value+
-
-Specifies the object that is to be substituted
-for each field that has an empty \String.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:empty_value) # => "" (empty string)
-
-With the default, <tt>""</tt>:
- CSV.parse_line('a,"",b,"",c') # => ["a", "", "b", "", "c"]
-
-With a different object:
- CSV.parse_line('a,"",b,"",c', empty_value: 'x') # => ["a", "x", "b", "x", "c"]
diff --git a/doc/csv/options/parsing/field_size_limit.rdoc b/doc/csv/options/parsing/field_size_limit.rdoc
deleted file mode 100644
index 797c5776fc..0000000000
--- a/doc/csv/options/parsing/field_size_limit.rdoc
+++ /dev/null
@@ -1,39 +0,0 @@
-====== Option +field_size_limit+
-
-Specifies the \Integer field size limit.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:field_size_limit) # => nil
-
-This is a maximum size CSV will read ahead looking for the closing quote for a field.
-(In truth, it reads to the first line ending beyond this size.)
-If a quote cannot be found within the limit, CSV will raise a MalformedCSVError,
-assuming the data is faulty.
-You can use this limit to prevent what are effectively DoS attacks on the parser.
-However, this limit can cause a legitimate parse to fail;
-therefore the default value is +nil+ (no limit).
-
-For the examples in this section:
- str = <<~EOT
- "a","b"
- "
- 2345
- ",""
- EOT
- str # => "\"a\",\"b\"\n\"\n2345\n\",\"\"\n"
-
-Using the default +nil+:
- ary = CSV.parse(str)
- ary # => [["a", "b"], ["\n2345\n", ""]]
-
-Using <tt>50</tt>:
- field_size_limit = 50
- ary = CSV.parse(str, field_size_limit: field_size_limit)
- ary # => [["a", "b"], ["\n2345\n", ""]]
-
----
-
-Raises an exception if a field is too long:
- big_str = "123456789\n" * 1024
- # Raises CSV::MalformedCSVError (Field size exceeded in line 1.)
- CSV.parse('valid,fields,"' + big_str + '"', field_size_limit: 2048)
diff --git a/doc/csv/options/parsing/header_converters.rdoc b/doc/csv/options/parsing/header_converters.rdoc
deleted file mode 100644
index 309180805f..0000000000
--- a/doc/csv/options/parsing/header_converters.rdoc
+++ /dev/null
@@ -1,43 +0,0 @@
-====== Option +header_converters+
-
-Specifies converters to be used in parsing headers.
-See {Header Converters}[#class-CSV-label-Header+Converters]
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:header_converters) # => nil
-
-Identical in functionality to option {converters}[#class-CSV-label-Option+converters]
-except that:
-- The converters apply only to the header row.
-- The built-in header converters are +:downcase+ and +:symbol+.
-
-This section assumes prior execution of:
- str = <<-EOT
- Name,Value
- foo,0
- bar,1
- baz,2
- EOT
- # With no header converter
- table = CSV.parse(str, headers: true)
- table.headers # => ["Name", "Value"]
-
-The value may be a header converter name
-(see {Stored Converters}[#class-CSV-label-Stored+Converters]):
- table = CSV.parse(str, headers: true, header_converters: :downcase)
- table.headers # => ["name", "value"]
-
-The value may be a converter list
-(see {Converter Lists}[#class-CSV-label-Converter+Lists]):
- header_converters = [:downcase, :symbol]
- table = CSV.parse(str, headers: true, header_converters: header_converters)
- table.headers # => [:name, :value]
-
-The value may be a \Proc custom converter
-(see {Custom Header Converters}[#class-CSV-label-Custom+Header+Converters]):
- upcase_converter = proc {|field| field.upcase }
- table = CSV.parse(str, headers: true, header_converters: upcase_converter)
- table.headers # => ["NAME", "VALUE"]
-
-See also {Custom Header Converters}[#class-CSV-label-Custom+Header+Converters]
-
diff --git a/doc/csv/options/parsing/headers.rdoc b/doc/csv/options/parsing/headers.rdoc
deleted file mode 100644
index 0ea151f24b..0000000000
--- a/doc/csv/options/parsing/headers.rdoc
+++ /dev/null
@@ -1,63 +0,0 @@
-====== Option +headers+
-
-Specifies a boolean, \Symbol, \Array, or \String to be used
-to define column headers.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:headers) # => false
-
----
-
-Without +headers+:
- str = <<-EOT
- Name,Count
- foo,0
- bar,1
- bax,2
- EOT
- csv = CSV.new(str)
- csv # => #<CSV io_type:StringIO encoding:UTF-8 lineno:0 col_sep:"," row_sep:"\n" quote_char:"\"">
- csv.headers # => nil
- csv.shift # => ["Name", "Count"]
-
----
-
-If set to +true+ or the \Symbol +:first_row+,
-the first row of the data is treated as a row of headers:
- str = <<-EOT
- Name,Count
- foo,0
- bar,1
- bax,2
- EOT
- csv = CSV.new(str, headers: true)
- csv # => #<CSV io_type:StringIO encoding:UTF-8 lineno:2 col_sep:"," row_sep:"\n" quote_char:"\"" headers:["Name", "Count"]>
- csv.headers # => ["Name", "Count"]
- csv.shift # => #<CSV::Row "Name":"bar" "Count":"1">
-
----
-
-If set to an \Array, the \Array elements are treated as headers:
- str = <<-EOT
- foo,0
- bar,1
- bax,2
- EOT
- csv = CSV.new(str, headers: ['Name', 'Count'])
- csv
- csv.headers # => ["Name", "Count"]
- csv.shift # => #<CSV::Row "Name":"bar" "Count":"1">
-
----
-
-If set to a \String +str+, method <tt>CSV::parse_line(str, options)</tt> is called
-with the current +options+, and the returned \Array is treated as headers:
- str = <<-EOT
- foo,0
- bar,1
- bax,2
- EOT
- csv = CSV.new(str, headers: 'Name,Count')
- csv
- csv.headers # => ["Name", "Count"]
- csv.shift # => #<CSV::Row "Name":"bar" "Count":"1">
diff --git a/doc/csv/options/parsing/liberal_parsing.rdoc b/doc/csv/options/parsing/liberal_parsing.rdoc
deleted file mode 100644
index 603de28613..0000000000
--- a/doc/csv/options/parsing/liberal_parsing.rdoc
+++ /dev/null
@@ -1,38 +0,0 @@
-====== Option +liberal_parsing+
-
-Specifies the boolean or hash value that determines whether
-CSV will attempt to parse input not conformant with RFC 4180,
-such as double quotes in unquoted fields.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:liberal_parsing) # => false
-
-For the next two examples:
- str = 'is,this "three, or four",fields'
-
-Without +liberal_parsing+:
- # Raises CSV::MalformedCSVError (Illegal quoting in line 1.)
- CSV.parse_line(str)
-
-With +liberal_parsing+:
- ary = CSV.parse_line(str, liberal_parsing: true)
- ary # => ["is", "this \"three", " or four\"", "fields"]
-
-Use the +backslash_quote+ sub-option to parse values that use
-a backslash to escape a double-quote character. This
-causes the parser to treat <code>\"</code> as if it were
-<code>""</code>.
-
-For the next two examples:
- str = 'Show,"Harry \"Handcuff\" Houdini, the one and only","Tampa Theater"'
-
-With +liberal_parsing+, but without the +backslash_quote+ sub-option:
- # Incorrect interpretation of backslash; incorrectly interprets the quoted comma as a field separator.
- ary = CSV.parse_line(str, liberal_parsing: true)
- ary # => ["Show", "\"Harry \\\"Handcuff\\\" Houdini", " the one and only\"", "Tampa Theater"]
- puts ary[1] # => "Harry \"Handcuff\" Houdini
-
-With +liberal_parsing+ and its +backslash_quote+ sub-option:
- ary = CSV.parse_line(str, liberal_parsing: { backslash_quote: true })
- ary # => ["Show", "Harry \"Handcuff\" Houdini, the one and only", "Tampa Theater"]
- puts ary[1] # => Harry "Handcuff" Houdini, the one and only
diff --git a/doc/csv/options/parsing/nil_value.rdoc b/doc/csv/options/parsing/nil_value.rdoc
deleted file mode 100644
index 412e8795e8..0000000000
--- a/doc/csv/options/parsing/nil_value.rdoc
+++ /dev/null
@@ -1,12 +0,0 @@
-====== Option +nil_value+
-
-Specifies the object that is to be substituted for each null (no-text) field.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:nil_value) # => nil
-
-With the default, +nil+:
- CSV.parse_line('a,,b,,c') # => ["a", nil, "b", nil, "c"]
-
-With a different object:
- CSV.parse_line('a,,b,,c', nil_value: 0) # => ["a", 0, "b", 0, "c"]
diff --git a/doc/csv/options/parsing/return_headers.rdoc b/doc/csv/options/parsing/return_headers.rdoc
deleted file mode 100644
index 45d2e3f3de..0000000000
--- a/doc/csv/options/parsing/return_headers.rdoc
+++ /dev/null
@@ -1,22 +0,0 @@
-====== Option +return_headers+
-
-Specifies the boolean that determines whether method #shift
-returns or ignores the header row.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:return_headers) # => false
-
-Examples:
- str = <<-EOT
- Name,Count
- foo,0
- bar,1
- bax,2
- EOT
- # Without return_headers, the first row shifted is the first data row.
- csv = CSV.new(str, headers: true)
- csv.shift # => #<CSV::Row "Name":"foo" "Count":"0">
- # With return_headers, the first row shifted is the header row.
- csv = CSV.new(str, headers: true, return_headers: true)
- csv.shift # => #<CSV::Row "Name":"Name" "Count":"Count">
-
diff --git a/doc/csv/options/parsing/skip_blanks.rdoc b/doc/csv/options/parsing/skip_blanks.rdoc
deleted file mode 100644
index 2c8f7b7bb8..0000000000
--- a/doc/csv/options/parsing/skip_blanks.rdoc
+++ /dev/null
@@ -1,31 +0,0 @@
-====== Option +skip_blanks+
-
-Specifies a boolean that determines whether blank lines in the input will be ignored;
-a line that contains a column separator is not considered to be blank.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:skip_blanks) # => false
-
-See also option {skip_lines}[#class-CSV-label-Option+skip_lines].
-
-For examples in this section:
- str = <<-EOT
- foo,0
-
- bar,1
- baz,2
-
- ,
- EOT
-
-Using the default, +false+:
- ary = CSV.parse(str)
- ary # => [["foo", "0"], [], ["bar", "1"], ["baz", "2"], [], [nil, nil]]
-
-Using +true+:
- ary = CSV.parse(str, skip_blanks: true)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"], [nil, nil]]
-
-Using a truthy value:
- ary = CSV.parse(str, skip_blanks: :foo)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"], [nil, nil]]
diff --git a/doc/csv/options/parsing/skip_lines.rdoc b/doc/csv/options/parsing/skip_lines.rdoc
deleted file mode 100644
index 1481c40a5f..0000000000
--- a/doc/csv/options/parsing/skip_lines.rdoc
+++ /dev/null
@@ -1,37 +0,0 @@
-====== Option +skip_lines+
-
-Specifies an object to use in identifying comment lines in the input that are to be ignored:
-* If a \Regexp, ignores lines that match it.
-* If a \String, converts it to a \Regexp and ignores lines that match it.
-* If +nil+, no lines are considered to be comments.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:skip_lines) # => nil
-
-For examples in this section:
- str = <<-EOT
- # Comment
- foo,0
- bar,1
- baz,2
- # Another comment
- EOT
- str # => "# Comment\nfoo,0\nbar,1\nbaz,2\n# Another comment\n"
-
-Using the default, +nil+:
- ary = CSV.parse(str)
- ary # => [["# Comment"], ["foo", "0"], ["bar", "1"], ["baz", "2"], ["# Another comment"]]
-
-Using a \Regexp:
- ary = CSV.parse(str, skip_lines: /^#/)
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Using a \String:
- ary = CSV.parse(str, skip_lines: '#')
- ary # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
----
-
-Raises an exception if given an object that is not a \Regexp, a \String, or +nil+:
- # Raises ArgumentError (:skip_lines has to respond to #match: 0)
- CSV.parse(str, skip_lines: 0)
diff --git a/doc/csv/options/parsing/strip.rdoc b/doc/csv/options/parsing/strip.rdoc
deleted file mode 100644
index 56ae4310c3..0000000000
--- a/doc/csv/options/parsing/strip.rdoc
+++ /dev/null
@@ -1,15 +0,0 @@
-====== Option +strip+
-
-Specifies the boolean value that determines whether
-whitespace is stripped from each input field.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:strip) # => false
-
-With default value +false+:
- ary = CSV.parse_line(' a , b ')
- ary # => [" a ", " b "]
-
-With value +true+:
- ary = CSV.parse_line(' a , b ', strip: true)
- ary # => ["a", "b"]
diff --git a/doc/csv/options/parsing/unconverted_fields.rdoc b/doc/csv/options/parsing/unconverted_fields.rdoc
deleted file mode 100644
index 3e7f839d49..0000000000
--- a/doc/csv/options/parsing/unconverted_fields.rdoc
+++ /dev/null
@@ -1,27 +0,0 @@
-====== Option +unconverted_fields+
-
-Specifies the boolean that determines whether unconverted field values are to be available.
-
-Default value:
- CSV::DEFAULT_OPTIONS.fetch(:unconverted_fields) # => nil
-
-The unconverted field values are those found in the source data,
-prior to any conversions performed via option +converters+.
-
-When option +unconverted_fields+ is +true+,
-each returned row (\Array or \CSV::Row) has an added method,
-+unconverted_fields+, that returns the unconverted field values:
- str = <<-EOT
- foo,0
- bar,1
- baz,2
- EOT
- # Without unconverted_fields
- csv = CSV.parse(str, converters: :integer)
- csv # => [["foo", 0], ["bar", 1], ["baz", 2]]
- csv.first.respond_to?(:unconverted_fields) # => false
- # With unconverted_fields
- csv = CSV.parse(str, converters: :integer, unconverted_fields: true)
- csv # => [["foo", 0], ["bar", 1], ["baz", 2]]
- csv.first.respond_to?(:unconverted_fields) # => true
- csv.first.unconverted_fields # => ["foo", "0"]
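-
-A similar sketch, assuming the same +str+, with headers added so that each returned row
-is a \CSV::Row rather than an \Array:
-  table = CSV.parse("Name,Value\n" + str, headers: true,
-                    converters: :integer, unconverted_fields: true)
-  table[0].class # => CSV::Row
-  table[0].unconverted_fields # => ["foo", "0"]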
diff --git a/doc/csv/recipes/filtering.rdoc b/doc/csv/recipes/filtering.rdoc
deleted file mode 100644
index 1552bf0fb8..0000000000
--- a/doc/csv/recipes/filtering.rdoc
+++ /dev/null
@@ -1,158 +0,0 @@
-== Recipes for Filtering \CSV
-
-These recipes are specific code examples for specific \CSV filtering tasks.
-
-For other recipes, see {Recipes for CSV}[./recipes_rdoc.html].
-
-All code snippets on this page assume that the following has been executed:
- require 'csv'
-
-=== Contents
-
-- {Source and Output Formats}[#label-Source+and+Output+Formats]
- - {Filtering String to String}[#label-Filtering+String+to+String]
- - {Recipe: Filter String to String with Headers}[#label-Recipe-3A+Filter+String+to+String+with+Headers]
- - {Recipe: Filter String to String Without Headers}[#label-Recipe-3A+Filter+String+to+String+Without+Headers]
- - {Filtering String to IO Stream}[#label-Filtering+String+to+IO+Stream]
- - {Recipe: Filter String to IO Stream with Headers}[#label-Recipe-3A+Filter+String+to+IO+Stream+with+Headers]
- - {Recipe: Filter String to IO Stream Without Headers}[#label-Recipe-3A+Filter+String+to+IO+Stream+Without+Headers]
- - {Filtering IO Stream to String}[#label-Filtering+IO+Stream+to+String]
- - {Recipe: Filter IO Stream to String with Headers}[#label-Recipe-3A+Filter+IO+Stream+to+String+with+Headers]
- - {Recipe: Filter IO Stream to String Without Headers}[#label-Recipe-3A+Filter+IO+Stream+to+String+Without+Headers]
- - {Filtering IO Stream to IO Stream}[#label-Filtering+IO+Stream+to+IO+Stream]
- - {Recipe: Filter IO Stream to IO Stream with Headers}[#label-Recipe-3A+Filter+IO+Stream+to+IO+Stream+with+Headers]
- - {Recipe: Filter IO Stream to IO Stream Without Headers}[#label-Recipe-3A+Filter+IO+Stream+to+IO+Stream+Without+Headers]
-
-=== Source and Output Formats
-
-You can use a Unix-style "filter" for \CSV data.
-The filter reads source \CSV data and writes output \CSV data as modified by the filter.
-The input and output \CSV data may be any mixture of \Strings and \IO streams.
-
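-For a Unix-style command-line filter, CSV.filter may also be called with no explicit source or output;
-the sketch below assumes its defaults of +ARGF+ for input and <tt>$stdout</tt> for output:
-  # upcase_names.rb: read CSV on STDIN (or from file arguments), write modified CSV on STDOUT.
-  require 'csv'
-  CSV.filter do |row|
-    row[0] = row[0].upcase
-  end
-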
-==== Filtering \String to \String
-
-You can filter one \String to another, with or without headers.
-
-===== Recipe: Filter \String to \String with Headers
-
-Use class method CSV.filter with option +headers+ to filter a \String to another \String:
- in_string = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- out_string = ''
- CSV.filter(in_string, out_string, headers: true) do |row|
- row[0] = row[0].upcase
- row[1] *= 4
- end
- out_string # => "Name,Value\nFOO,0000\nBAR,1111\nBAZ,2222\n"
-
-===== Recipe: Filter \String to \String Without Headers
-
-Use class method CSV.filter without option +headers+ to filter a \String to another \String:
- in_string = "foo,0\nbar,1\nbaz,2\n"
- out_string = ''
- CSV.filter(in_string, out_string) do |row|
- row[0] = row[0].upcase
- row[1] *= 4
- end
- out_string # => "FOO,0000\nBAR,1111\nBAZ,2222\n"
-
-==== Filtering \String to \IO Stream
-
-You can filter a \String to an \IO stream, with or without headers.
-
-===== Recipe: Filter \String to \IO Stream with Headers
-
-Use class method CSV.filter with option +headers+ to filter a \String to an \IO stream:
- in_string = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- path = 't.csv'
- File.open(path, 'w') do |out_io|
- CSV.filter(in_string, out_io, headers: true) do |row|
- row[0] = row[0].upcase
- row[1] *= 4
- end
- end
- p File.read(path) # => "Name,Value\nFOO,0000\nBAR,1111\nBAZ,2222\n"
-
-===== Recipe: Filter \String to \IO Stream Without Headers
-
-Use class method CSV.filter without option +headers+ to filter a \String to an \IO stream:
- in_string = "foo,0\nbar,1\nbaz,2\n"
- path = 't.csv'
- File.open(path, 'w') do |out_io|
- CSV.filter(in_string, out_io) do |row|
- row[0] = row[0].upcase
- row[1] *= 4
- end
- end
- p File.read(path) # => "FOO,0000\nBAR,1111\nBAZ,2222\n"
-
-==== Filtering \IO Stream to \String
-
-You can filter an \IO stream to a \String, with or without headers.
-
-===== Recipe: Filter \IO Stream to \String with Headers
-
-Use class method CSV.filter with option +headers+ to filter an \IO stream to a \String:
- in_string = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- path = 't.csv'
- File.write(path, in_string)
- out_string = ''
- File.open(path) do |in_io|
- CSV.filter(in_io, out_string, headers: true) do |row|
- row[0] = row[0].upcase
- row[1] *= 4
- end
- end
- out_string # => "Name,Value\nFOO,0000\nBAR,1111\nBAZ,2222\n"
-
-===== Recipe: Filter \IO Stream to \String Without Headers
-
-Use class method CSV.filter without option +headers+ to filter an \IO stream to a \String:
- in_string = "foo,0\nbar,1\nbaz,2\n"
- path = 't.csv'
- File.write(path, in_string)
- out_string = ''
- File.open(path) do |in_io|
- CSV.filter(in_io, out_string) do |row|
- row[0] = row[0].upcase
- row[1] *= 4
- end
- end
- out_string # => "FOO,0000\nBAR,1111\nBAZ,2222\n"
-
-==== Filtering \IO Stream to \IO Stream
-
-You can filter an \IO stream to another \IO stream, with or without headers.
-
-===== Recipe: Filter \IO Stream to \IO Stream with Headers
-
-Use class method CSV.filter with option +headers+ to filter an \IO stream to another \IO stream:
- in_path = 't.csv'
- in_string = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- File.write(in_path, in_string)
- out_path = 'u.csv'
- File.open(in_path) do |in_io|
- File.open(out_path, 'w') do |out_io|
- CSV.filter(in_io, out_io, headers: true) do |row|
- row[0] = row[0].upcase
- row[1] *= 4
- end
- end
- end
- p File.read(out_path) # => "Name,Value\nFOO,0000\nBAR,1111\nBAZ,2222\n"
-
-===== Recipe: Filter \IO Stream to \IO Stream Without Headers
-
-Use class method CSV.filter without option +headers+ to filter an \IO stream to another \IO stream:
- in_path = 't.csv'
- in_string = "foo,0\nbar,1\nbaz,2\n"
- File.write(in_path, in_string)
- out_path = 'u.csv'
- File.open(in_path) do |in_io|
- File.open(out_path, 'w') do |out_io|
- CSV.filter(in_io, out_io) do |row|
- row[0] = row[0].upcase
- row[1] *= 4
- end
- end
- end
- p File.read(out_path) # => "FOO,0000\nBAR,1111\nBAZ,2222\n"
diff --git a/doc/csv/recipes/generating.rdoc b/doc/csv/recipes/generating.rdoc
deleted file mode 100644
index e61838d31a..0000000000
--- a/doc/csv/recipes/generating.rdoc
+++ /dev/null
@@ -1,246 +0,0 @@
-== Recipes for Generating \CSV
-
-These recipes are specific code examples for specific \CSV generating tasks.
-
-For other recipes, see {Recipes for CSV}[./recipes_rdoc.html].
-
-All code snippets on this page assume that the following has been executed:
- require 'csv'
-
-=== Contents
-
-- {Output Formats}[#label-Output+Formats]
- - {Generating to a String}[#label-Generating+to+a+String]
- - {Recipe: Generate to String with Headers}[#label-Recipe-3A+Generate+to+String+with+Headers]
- - {Recipe: Generate to String Without Headers}[#label-Recipe-3A+Generate+to+String+Without+Headers]
- - {Generating to a File}[#label-Generating+to+a+File]
- - {Recipe: Generate to File with Headers}[#label-Recipe-3A+Generate+to+File+with+Headers]
- - {Recipe: Generate to File Without Headers}[#label-Recipe-3A+Generate+to+File+Without+Headers]
- - {Generating to an IO Stream}[#label-Generating+to+an+IO+Stream]
- - {Recipe: Generate to IO Stream with Headers}[#label-Recipe-3A+Generate+to+IO+Stream+with+Headers]
- - {Recipe: Generate to IO Stream Without Headers}[#label-Recipe-3A+Generate+to+IO+Stream+Without+Headers]
-- {Converting Fields}[#label-Converting+Fields]
- - {Recipe: Filter Generated Field Strings}[#label-Recipe-3A+Filter+Generated+Field+Strings]
- - {Recipe: Specify Multiple Write Converters}[#label-Recipe-3A+Specify+Multiple+Write+Converters]
-- {RFC 4180 Compliance}[#label-RFC+4180+Compliance]
- - {Row Separator}[#label-Row+Separator]
- - {Recipe: Generate Compliant Row Separator}[#label-Recipe-3A+Generate+Compliant+Row+Separator]
- - {Recipe: Generate Non-Compliant Row Separator}[#label-Recipe-3A+Generate+Non-Compliant+Row+Separator]
- - {Column Separator}[#label-Column+Separator]
- - {Recipe: Generate Compliant Column Separator}[#label-Recipe-3A+Generate+Compliant+Column+Separator]
- - {Recipe: Generate Non-Compliant Column Separator}[#label-Recipe-3A+Generate+Non-Compliant+Column+Separator]
- - {Quote Character}[#label-Quote+Character]
- - {Recipe: Generate Compliant Quote Character}[#label-Recipe-3A+Generate+Compliant+Quote+Character]
- - {Recipe: Generate Non-Compliant Quote Character}[#label-Recipe-3A+Generate+Non-Compliant+Quote+Character]
-
-=== Output Formats
-
-You can generate \CSV output to a \String, to a \File (via its path), or to an \IO stream.
-
-==== Generating to a \String
-
-You can generate \CSV output to a \String, with or without headers.
-
-===== Recipe: Generate to \String with Headers
-
-Use class method CSV.generate with option +headers+ to generate to a \String.
-
-This example uses method CSV#<< to append the rows
-that are to be generated:
- output_string = CSV.generate('', headers: ['Name', 'Value'], write_headers: true) do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- output_string # => "Name,Value\nFoo,0\nBar,1\nBaz,2\n"
-
-===== Recipe: Generate to \String Without Headers
-
-Use class method CSV.generate without option +headers+ to generate to a \String.
-
-This example uses method CSV#<< to append the rows
-that are to be generated:
- output_string = CSV.generate do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- output_string # => "Foo,0\nBar,1\nBaz,2\n"
-
-==== Generating to a \File
-
-You can generate \CSV data to a \File, with or without headers.
-
-===== Recipe: Generate to \File with Headers
-
-Use class method CSV.open with option +headers+ to generate to a \File.
-
-This example uses method CSV#<< to append the rows
-that are to be generated:
- path = 't.csv'
- CSV.open(path, 'w', headers: ['Name', 'Value'], write_headers: true) do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- p File.read(path) # => "Name,Value\nFoo,0\nBar,1\nBaz,2\n"
-
-===== Recipe: Generate to \File Without Headers
-
-Use class method CSV.open without option +headers+ to generate to a \File.
-
-This example uses method CSV#<< to append the rows
-that are to be generated:
- path = 't.csv'
- CSV.open(path, 'w') do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- p File.read(path) # => "Foo,0\nBar,1\nBaz,2\n"
-
-==== Generating to an \IO Stream
-
-You can generate \CSV data to an \IO stream, with or without headers.
-
-===== Recipe: Generate to \IO Stream with Headers
-
-Use class method CSV.new with option +headers+ to generate \CSV data to an \IO stream:
- path = 't.csv'
- File.open(path, 'w') do |file|
- csv = CSV.new(file, headers: ['Name', 'Value'], write_headers: true)
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- p File.read(path) # => "Name,Value\nFoo,0\nBar,1\nBaz,2\n"
-
-===== Recipe: Generate to \IO Stream Without Headers
-
-Use class method CSV.new without option +headers+ to generate \CSV data to an \IO stream:
- path = 't.csv'
- File.open(path, 'w') do |file|
- csv = CSV.new(file)
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- p File.read(path) # => "Foo,0\nBar,1\nBaz,2\n"
-
-=== Converting Fields
-
-You can use _write_ _converters_ to convert fields when generating \CSV.
-
-==== Recipe: Filter Generated Field Strings
-
-Use option <tt>:write_converters</tt> and a custom converter to convert field values when generating \CSV.
-
-This example defines and uses a custom write converter to strip whitespace from generated fields:
- strip_converter = proc {|field| field.respond_to?(:strip) ? field.strip : field }
- output_string = CSV.generate(write_converters: strip_converter) do |csv|
- csv << [' foo ', 0]
- csv << [' bar ', 1]
- csv << [' baz ', 2]
- end
- output_string # => "foo,0\nbar,1\nbaz,2\n"
-
-==== Recipe: Specify Multiple Write Converters
-
-Use option <tt>:write_converters</tt> and multiple custom converters
-to convert field values when generating \CSV.
-
-This example defines and uses two custom write converters to strip and upcase generated fields:
- strip_converter = proc {|field| field.respond_to?(:strip) ? field.strip : field }
- upcase_converter = proc {|field| field.respond_to?(:upcase) ? field.upcase : field }
- converters = [strip_converter, upcase_converter]
- output_string = CSV.generate(write_converters: converters) do |csv|
- csv << [' foo ', 0]
- csv << [' bar ', 1]
- csv << [' baz ', 2]
- end
- output_string # => "FOO,0\nBAR,1\nBAZ,2\n"
-
-=== RFC 4180 Compliance
-
-By default, \CSV generates data that is compliant with
-{RFC 4180}[https://www.rfc-editor.org/rfc/rfc4180]
-with respect to:
-- Column separator.
-- Quote character.
-
-==== Row Separator
-
-RFC 4180 specifies the row separator CRLF (Ruby <tt>"\r\n"</tt>).
-
-===== Recipe: Generate Compliant Row Separator
-
-For strict compliance, use option +:row_sep+ to specify row separator <tt>"\r\n"</tt>:
- output_string = CSV.generate('', row_sep: "\r\n") do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- output_string # => "Foo,0\r\nBar,1\r\nBaz,2\r\n"
-
-===== Recipe: Generate Non-Compliant Row Separator
-
-For data with non-compliant row separators, use option +:row_sep+ with a different value.
-This example source uses semicolon (<tt>";"</tt>) as its row separator:
- output_string = CSV.generate('', row_sep: ";") do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- output_string # => "Foo,0;Bar,1;Baz,2;"
-
-==== Column Separator
-
-RFC 4180 specifies column separator COMMA (Ruby <tt>","</tt>).
-
-===== Recipe: Generate Compliant Column Separator
-
-Because the \CSV default column separator is <tt>","</tt>,
-you need not specify option +:col_sep+ for compliant data:
- output_string = CSV.generate('') do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- output_string # => "Foo,0\nBar,1\nBaz,2\n"
-
-===== Recipe: Generate Non-Compliant Column Separator
-
-For data with non-compliant column separators, use option +:col_sep+.
-This example source uses TAB (<tt>"\t"</tt>) as its column separator:
- output_string = CSV.generate('', col_sep: "\t") do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- output_string # => "Foo\t0\nBar\t1\nBaz\t2\n"
-
-==== Quote Character
-
-RFC 4180 specifies quote character DQUOTE (Ruby <tt>"\""</tt>).
-
-===== Recipe: Generate Compliant Quote Character
-
-Because the \CSV default quote character is <tt>"\""</tt>,
-you need not specify option +:quote_char+ for compliant data:
- output_string = CSV.generate('', force_quotes: true) do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- output_string # => "\"Foo\",\"0\"\n\"Bar\",\"1\"\n\"Baz\",\"2\"\n"
-
-===== Recipe: Generate Non-Compliant Quote Character
-
-For data with non-compliant quote characters, use option +:quote_char+.
-This example source uses SQUOTE (<tt>"'"</tt>) as its quote character:
- output_string = CSV.generate('', quote_char: "'", force_quotes: true) do |csv|
- csv << ['Foo', 0]
- csv << ['Bar', 1]
- csv << ['Baz', 2]
- end
- output_string # => "'Foo','0'\n'Bar','1'\n'Baz','2'\n"
diff --git a/doc/csv/recipes/parsing.rdoc b/doc/csv/recipes/parsing.rdoc
deleted file mode 100644
index 1b7071e33f..0000000000
--- a/doc/csv/recipes/parsing.rdoc
+++ /dev/null
@@ -1,545 +0,0 @@
-== Recipes for Parsing \CSV
-
-These recipes are specific code examples for specific \CSV parsing tasks.
-
-For other recipes, see {Recipes for CSV}[./recipes_rdoc.html].
-
-All code snippets on this page assume that the following has been executed:
- require 'csv'
-
-=== Contents
-
-- {Source Formats}[#label-Source+Formats]
- - {Parsing from a String}[#label-Parsing+from+a+String]
- - {Recipe: Parse from String with Headers}[#label-Recipe-3A+Parse+from+String+with+Headers]
- - {Recipe: Parse from String Without Headers}[#label-Recipe-3A+Parse+from+String+Without+Headers]
- - {Parsing from a File}[#label-Parsing+from+a+File]
- - {Recipe: Parse from File with Headers}[#label-Recipe-3A+Parse+from+File+with+Headers]
- - {Recipe: Parse from File Without Headers}[#label-Recipe-3A+Parse+from+File+Without+Headers]
- - {Parsing from an IO Stream}[#label-Parsing+from+an+IO+Stream]
- - {Recipe: Parse from IO Stream with Headers}[#label-Recipe-3A+Parse+from+IO+Stream+with+Headers]
- - {Recipe: Parse from IO Stream Without Headers}[#label-Recipe-3A+Parse+from+IO+Stream+Without+Headers]
-- {RFC 4180 Compliance}[#label-RFC+4180+Compliance]
- - {Row Separator}[#label-Row+Separator]
- - {Recipe: Handle Compliant Row Separator}[#label-Recipe-3A+Handle+Compliant+Row+Separator]
- - {Recipe: Handle Non-Compliant Row Separator}[#label-Recipe-3A+Handle+Non-Compliant+Row+Separator]
- - {Column Separator}[#label-Column+Separator]
- - {Recipe: Handle Compliant Column Separator}[#label-Recipe-3A+Handle+Compliant+Column+Separator]
- - {Recipe: Handle Non-Compliant Column Separator}[#label-Recipe-3A+Handle+Non-Compliant+Column+Separator]
- - {Quote Character}[#label-Quote+Character]
- - {Recipe: Handle Compliant Quote Character}[#label-Recipe-3A+Handle+Compliant+Quote+Character]
- - {Recipe: Handle Non-Compliant Quote Character}[#label-Recipe-3A+Handle+Non-Compliant+Quote+Character]
- - {Recipe: Allow Liberal Parsing}[#label-Recipe-3A+Allow+Liberal+Parsing]
-- {Special Handling}[#label-Special+Handling]
- - {Special Line Handling}[#label-Special+Line+Handling]
- - {Recipe: Ignore Blank Lines}[#label-Recipe-3A+Ignore+Blank+Lines]
- - {Recipe: Ignore Selected Lines}[#label-Recipe-3A+Ignore+Selected+Lines]
- - {Special Field Handling}[#label-Special+Field+Handling]
- - {Recipe: Strip Fields}[#label-Recipe-3A+Strip+Fields]
- - {Recipe: Handle Null Fields}[#label-Recipe-3A+Handle+Null+Fields]
- - {Recipe: Handle Empty Fields}[#label-Recipe-3A+Handle+Empty+Fields]
-- {Converting Fields}[#label-Converting+Fields]
- - {Converting Fields to Objects}[#label-Converting+Fields+to+Objects]
- - {Recipe: Convert Fields to Integers}[#label-Recipe-3A+Convert+Fields+to+Integers]
- - {Recipe: Convert Fields to Floats}[#label-Recipe-3A+Convert+Fields+to+Floats]
- - {Recipe: Convert Fields to Numerics}[#label-Recipe-3A+Convert+Fields+to+Numerics]
- - {Recipe: Convert Fields to Dates}[#label-Recipe-3A+Convert+Fields+to+Dates]
- - {Recipe: Convert Fields to DateTimes}[#label-Recipe-3A+Convert+Fields+to+DateTimes]
- - {Recipe: Convert Assorted Fields to Objects}[#label-Recipe-3A+Convert+Assorted+Fields+to+Objects]
- - {Recipe: Convert Fields to Other Objects}[#label-Recipe-3A+Convert+Fields+to+Other+Objects]
- - {Recipe: Filter Field Strings}[#label-Recipe-3A+Filter+Field+Strings]
- - {Recipe: Register Field Converters}[#label-Recipe-3A+Register+Field+Converters]
- - {Using Multiple Field Converters}[#label-Using+Multiple+Field+Converters]
- - {Recipe: Specify Multiple Field Converters in Option :converters}[#label-Recipe-3A+Specify+Multiple+Field+Converters+in+Option+-3Aconverters]
- - {Recipe: Specify Multiple Field Converters in a Custom Converter List}[#label-Recipe-3A+Specify+Multiple+Field+Converters+in+a+Custom+Converter+List]
-- {Converting Headers}[#label-Converting+Headers]
- - {Recipe: Convert Headers to Lowercase}[#label-Recipe-3A+Convert+Headers+to+Lowercase]
- - {Recipe: Convert Headers to Symbols}[#label-Recipe-3A+Convert+Headers+to+Symbols]
- - {Recipe: Filter Header Strings}[#label-Recipe-3A+Filter+Header+Strings]
- - {Recipe: Register Header Converters}[#label-Recipe-3A+Register+Header+Converters]
- - {Using Multiple Header Converters}[#label-Using+Multiple+Header+Converters]
- - {Recipe: Specify Multiple Header Converters in Option :header_converters}[#label-Recipe-3A+Specify+Multiple+Header+Converters+in+Option+-3Aheader_converters]
- - {Recipe: Specify Multiple Header Converters in a Custom Header Converter List}[#label-Recipe-3A+Specify+Multiple+Header+Converters+in+a+Custom+Header+Converter+List]
-- {Diagnostics}[#label-Diagnostics]
- - {Recipe: Capture Unconverted Fields}[#label-Recipe-3A+Capture+Unconverted+Fields]
- - {Recipe: Capture Field Info}[#label-Recipe-3A+Capture+Field+Info]
-
-=== Source Formats
-
-You can parse \CSV data from a \String, from a \File (via its path), or from an \IO stream.
-
-==== Parsing from a \String
-
-You can parse \CSV data from a \String, with or without headers.
-
-===== Recipe: Parse from \String with Headers
-
-Use class method CSV.parse with option +headers+ to read a source \String all at once
-(may have memory resource implications):
- string = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- CSV.parse(string, headers: true) # => #<CSV::Table mode:col_or_row row_count:4>
-
-Use instance method CSV#each with option +headers+ to read a source \String one row at a time:
- CSV.new(string, headers: true).each do |row|
- p row
- end
-Output:
- #<CSV::Row "Name":"foo" "Value":"0">
- #<CSV::Row "Name":"bar" "Value":"1">
- #<CSV::Row "Name":"baz" "Value":"2">
-
-===== Recipe: Parse from \String Without Headers
-
-Use class method CSV.parse without option +headers+ to read a source \String all at once
-(may have memory resource implications):
- string = "foo,0\nbar,1\nbaz,2\n"
- CSV.parse(string) # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Use instance method CSV#each without option +headers+ to read a source \String one row at a time:
- CSV.new(string).each do |row|
- p row
- end
-Output:
- ["foo", "0"]
- ["bar", "1"]
- ["baz", "2"]
-
-==== Parsing from a \File
-
-You can parse \CSV data from a \File, with or without headers.
-
-===== Recipe: Parse from \File with Headers
-
-Use class method CSV.read with option +headers+ to read a file all at once:
- string = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- path = 't.csv'
- File.write(path, string)
- CSV.read(path, headers: true) # => #<CSV::Table mode:col_or_row row_count:4>
-
-Use class method CSV.foreach with option +headers+ to read one row at a time:
- CSV.foreach(path, headers: true) do |row|
- p row
- end
-Output:
- #<CSV::Row "Name":"foo" "Value":"0">
- #<CSV::Row "Name":"bar" "Value":"1">
- #<CSV::Row "Name":"baz" "Value":"2">
-
-===== Recipe: Parse from \File Without Headers
-
-Use class method CSV.read without option +headers+ to read a file all at once:
- string = "foo,0\nbar,1\nbaz,2\n"
- path = 't.csv'
- File.write(path, string)
- CSV.read(path) # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Use class method CSV.foreach without option +headers+ to read one row at a time:
- CSV.foreach(path) do |row|
- p row
- end
-Output:
- ["foo", "0"]
- ["bar", "1"]
- ["baz", "2"]
-
-==== Parsing from an \IO Stream
-
-You can parse \CSV data from an \IO stream, with or without headers.
-
-===== Recipe: Parse from \IO Stream with Headers
-
-Use class method CSV.parse with option +headers+ to read an \IO stream all at once:
- string = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- path = 't.csv'
- File.write(path, string)
- File.open(path) do |file|
- CSV.parse(file, headers: true)
- end # => #<CSV::Table mode:col_or_row row_count:4>
-
-Use class method CSV.foreach with option +headers+ to read one row at a time:
- File.open(path) do |file|
- CSV.foreach(file, headers: true) do |row|
- p row
- end
- end
-Output:
- #<CSV::Row "Name":"foo" "Value":"0">
- #<CSV::Row "Name":"bar" "Value":"1">
- #<CSV::Row "Name":"baz" "Value":"2">
-
-===== Recipe: Parse from \IO Stream Without Headers
-
-Use class method CSV.parse without option +headers+ to read an \IO stream all at once:
- string = "foo,0\nbar,1\nbaz,2\n"
- path = 't.csv'
- File.write(path, string)
- File.open(path) do |file|
- CSV.parse(file)
- end # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-Use class method CSV.foreach without option +headers+ to read one row at a time:
- File.open(path) do |file|
- CSV.foreach(file) do |row|
- p row
- end
- end
-Output:
- ["foo", "0"]
- ["bar", "1"]
- ["baz", "2"]
-
-=== RFC 4180 Compliance
-
-By default, \CSV parses data that is compliant with
-{RFC 4180}[https://www.rfc-editor.org/rfc/rfc4180]
-with respect to:
-- Row separator.
-- Column separator.
-- Quote character.
-
-==== Row Separator
-
-RFC 4180 specifies the row separator CRLF (Ruby <tt>"\r\n"</tt>).
-
-Although the \CSV default row separator is <tt>"\n"</tt>,
-the parser also by default handles row separator <tt>"\r"</tt> and the RFC-compliant <tt>"\r\n"</tt>.
-
-===== Recipe: Handle Compliant Row Separator
-
-For strict compliance, use option +:row_sep+ to specify the row separator <tt>"\r\n"</tt>;
-this accepts the compliant row separator:
- source = "foo,1\r\nbar,1\r\nbaz,2\r\n"
- CSV.parse(source, row_sep: "\r\n") # => [["foo", "1"], ["bar", "1"], ["baz", "2"]]
-But it rejects other row separators:
- source = "foo,1\nbar,1\nbaz,2\n"
- CSV.parse(source, row_sep: "\r\n") # Raises MalformedCSVError
- source = "foo,1\rbar,1\rbaz,2\r"
- CSV.parse(source, row_sep: "\r\n") # Raises MalformedCSVError
- source = "foo,1\n\rbar,1\n\rbaz,2\n\r"
- CSV.parse(source, row_sep: "\r\n") # Raises MalformedCSVError
-
-===== Recipe: Handle Non-Compliant Row Separator
-
-For data with non-compliant row separators, use option +:row_sep+.
-This example source uses semicolon (<tt>";"</tt>) as its row separator:
- source = "foo,1;bar,1;baz,2;"
- CSV.parse(source, row_sep: ';') # => [["foo", "1"], ["bar", "1"], ["baz", "2"]]
-
-==== Column Separator
-
-RFC 4180 specifies column separator COMMA (Ruby <tt>","</tt>).
-
-===== Recipe: Handle Compliant Column Separator
-
-Because the \CSV default column separator is <tt>","</tt>,
-you need not specify option +:col_sep+ for compliant data:
- source = "foo,1\nbar,1\nbaz,2\n"
- CSV.parse(source) # => [["foo", "1"], ["bar", "1"], ["baz", "2"]]
-
-===== Recipe: Handle Non-Compliant Column Separator
-
-For data with non-compliant column separators, use option +:col_sep+.
-This example source uses TAB (<tt>"\t"</tt>) as its column separator:
- source = "foo,1\tbar,1\tbaz,2"
- CSV.parse(source, col_sep: "\t") # => [["foo", "1"], ["bar", "1"], ["baz", "2"]]
-
-==== Quote Character
-
-RFC 4180 specifies quote character DQUOTE (Ruby <tt>"\""</tt>).
-
-===== Recipe: Handle Compliant Quote Character
-
-Because the \CSV default quote character is <tt>"\""</tt>,
-you need not specify option +:quote_char+ for compliant data:
- source = "\"foo\",\"1\"\n\"bar\",\"1\"\n\"baz\",\"2\"\n"
- CSV.parse(source) # => [["foo", "1"], ["bar", "1"], ["baz", "2"]]
-
-===== Recipe: Handle Non-Compliant Quote Character
-
-For data with non-compliant quote characters, use option +:quote_char+.
-This example source uses SQUOTE (<tt>"'"</tt>) as its quote character:
- source = "'foo','1'\n'bar','1'\n'baz','2'\n"
- CSV.parse(source, quote_char: "'") # => [["foo", "1"], ["bar", "1"], ["baz", "2"]]
-
-==== Recipe: Allow Liberal Parsing
-
-Use option +:liberal_parsing+ to specify that \CSV should
-attempt to parse input not conformant with RFC 4180, such as double quotes in unquoted fields:
- source = 'is,this "three, or four",fields'
- CSV.parse(source) # Raises MalformedCSVError
- CSV.parse(source, liberal_parsing: true) # => [["is", "this \"three", " or four\"", "fields"]]
-
-=== Special Handling
-
-You can use parsing options to specify special handling for certain lines and fields.
-
-==== Special Line Handling
-
-Use parsing options to specify special handling for blank lines, or for other selected lines.
-
-===== Recipe: Ignore Blank Lines
-
-Use option +:skip_blanks+ to ignore blank lines:
- source = <<-EOT
- foo,0
-
- bar,1
- baz,2
-
- ,
- EOT
- parsed = CSV.parse(source, skip_blanks: true)
- parsed # => [["foo", "0"], ["bar", "1"], ["baz", "2"], [nil, nil]]
-
-===== Recipe: Ignore Selected Lines
-
-Use option +:skip_lines+ to ignore selected lines.
- source = <<-EOT
- # Comment
- foo,0
- bar,1
- baz,2
- # Another comment
- EOT
- parsed = CSV.parse(source, skip_lines: /^#/)
- parsed # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-
-==== Special Field Handling
-
-Use parsing options to specify special handling for certain field values.
-
-===== Recipe: Strip Fields
-
-Use option +:strip+ to strip parsed field values:
- CSV.parse_line(' a , b ', strip: true) # => ["a", "b"]
-
-===== Recipe: Handle Null Fields
-
-Use option +:nil_value+ to specify a value that will replace each field
-that is null (no text):
- CSV.parse_line('a,,b,,c', nil_value: 0) # => ["a", 0, "b", 0, "c"]
-
-===== Recipe: Handle Empty Fields
-
-Use option +:empty_value+ to specify a value that will replace each field
-that is empty (\String of length 0):
- CSV.parse_line('a,"",b,"",c', empty_value: 'x') # => ["a", "x", "b", "x", "c"]
-
-=== Converting Fields
-
-You can use field converters to change parsed \String fields into other objects,
-or to otherwise modify the \String fields.
-
-==== Converting Fields to Objects
-
-Use field converters to change parsed \String objects into other, more specific, objects.
-
-There are built-in field converters for converting to objects of certain classes:
-- \Float
-- \Integer
-- \Date
-- \DateTime
-
-Other built-in field converters include:
-- +:numeric+: converts to \Integer and \Float.
-- +:all+: converts to \DateTime, \Integer, \Float.
-
-You can also define field converters to convert to objects of other classes.
-
-===== Recipe: Convert Fields to Integers
-
-Convert fields to \Integer objects using built-in converter +:integer+:
- source = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, headers: true, converters: :integer)
- parsed.map {|row| row['Value'].class} # => [Integer, Integer, Integer]
-
-===== Recipe: Convert Fields to Floats
-
-Convert fields to \Float objects using built-in converter +:float+:
- source = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, headers: true, converters: :float)
- parsed.map {|row| row['Value'].class} # => [Float, Float, Float]
-
-===== Recipe: Convert Fields to Numerics
-
-Convert fields to \Integer and \Float objects using built-in converter +:numeric+:
- source = "Name,Value\nfoo,0\nbar,1.1\nbaz,2.2\n"
- parsed = CSV.parse(source, headers: true, converters: :numeric)
- parsed.map {|row| row['Value'].class} # => [Integer, Float, Float]
-
-===== Recipe: Convert Fields to Dates
-
-Convert fields to \Date objects using built-in converter +:date+:
- source = "Name,Date\nfoo,2001-02-03\nbar,2001-02-04\nbaz,2001-02-03\n"
- parsed = CSV.parse(source, headers: true, converters: :date)
- parsed.map {|row| row['Date'].class} # => [Date, Date, Date]
-
-===== Recipe: Convert Fields to DateTimes
-
-Convert fields to \DateTime objects using built-in converter +:date_time+:
- source = "Name,DateTime\nfoo,2001-02-03\nbar,2001-02-04\nbaz,2020-05-07T14:59:00-05:00\n"
- parsed = CSV.parse(source, headers: true, converters: :date_time)
- parsed.map {|row| row['DateTime'].class} # => [DateTime, DateTime, DateTime]
-
-===== Recipe: Convert Assorted Fields to Objects
-
-Convert assorted fields to objects using built-in converter +:all+:
- source = "Type,Value\nInteger,0\nFloat,1.0\nDateTime,2001-02-04\n"
- parsed = CSV.parse(source, headers: true, converters: :all)
- parsed.map {|row| row['Value'].class} # => [Integer, Float, DateTime]
-
-===== Recipe: Convert Fields to Other Objects
-
-Define a custom field converter to convert \String fields into other objects.
-This example defines and uses a custom field converter
-that converts each value in column 1 (the second column, zero-based) to a \Rational object:
- rational_converter = proc do |field, field_context|
- field_context.index == 1 ? field.to_r : field
- end
- source = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, headers: true, converters: rational_converter)
- parsed.map {|row| row['Value'].class} # => [Rational, Rational, Rational]
-
-==== Recipe: Filter Field Strings
-
-Define a custom field converter to modify \String fields.
-This example defines and uses a custom field converter
-that strips whitespace from each field value:
- strip_converter = proc {|field| field.strip }
- source = "Name,Value\n foo , 0 \n bar , 1 \n baz , 2 \n"
- parsed = CSV.parse(source, headers: true, converters: strip_converter)
- parsed['Name'] # => ["foo", "bar", "baz"]
- parsed['Value'] # => ["0", "1", "2"]
-
-==== Recipe: Register Field Converters
-
-Register a custom field converter, assigning it a name;
-then refer to the converter by its name:
- rational_converter = proc do |field, field_context|
- field_context.index == 1 ? field.to_r : field
- end
- CSV::Converters[:rational] = rational_converter
- source = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, headers: true, converters: :rational)
- parsed['Value'] # => [(0/1), (1/1), (2/1)]
-
-==== Using Multiple Field Converters
-
-You can use multiple field converters in either of these ways:
-- Specify converters in option +:converters+.
-- Specify converters in a custom converter list.
-
-===== Recipe: Specify Multiple Field Converters in Option +:converters+
-
-Apply multiple field converters by specifying them in option +:converters+:
- source = "Name,Value\nfoo,0\nbar,1.0\nbaz,2.0\n"
- parsed = CSV.parse(source, headers: true, converters: [:integer, :float])
- parsed['Value'] # => [0, 1.0, 2.0]
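-
-The converters run in the order given, each receiving the previous result,
-and conversion of a field stops once its value is no longer a \String;
-a minimal sketch of the ordering:
- CSV.parse_line('1.5,2', converters: [:integer, :float]) # => [1.5, 2]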
-
-===== Recipe: Specify Multiple Field Converters in a Custom Converter List
-
-Apply multiple field converters by defining and registering a custom converter list:
- strip_converter = proc {|field| field.strip }
- CSV::Converters[:strip] = strip_converter
- CSV::Converters[:my_converters] = [:integer, :float, :strip]
- source = "Name,Value\n foo , 0 \n bar , 1.0 \n baz , 2.0 \n"
- parsed = CSV.parse(source, headers: true, converters: :my_converters)
- parsed['Name'] # => ["foo", "bar", "baz"]
- parsed['Value'] # => [0, 1.0, 2.0]
-
-=== Converting Headers
-
-You can use header converters to modify parsed \String headers.
-
-Built-in header converters include:
-- +:symbol+: converts \String header to \Symbol.
-- +:downcase+: converts \String header to lowercase.
-
-You can also define header converters to otherwise modify header \Strings.
-
-==== Recipe: Convert Headers to Lowercase
-
-Convert headers to lowercase using built-in converter +:downcase+:
- source = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, headers: true, header_converters: :downcase)
- parsed.headers # => ["name", "value"]
-
-==== Recipe: Convert Headers to Symbols
-
-Convert headers to downcased Symbols using built-in converter +:symbol+:
- source = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, headers: true, header_converters: :symbol)
- parsed.headers # => [:name, :value]
- parsed.headers.map {|header| header.class} # => [Symbol, Symbol]
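-
-Converter +:symbol+ also strips each header and replaces internal whitespace with underscores;
-a small illustration (the header names here are invented):
- source = "First Name,Value\nfoo,0\n"
- parsed = CSV.parse(source, headers: true, header_converters: :symbol)
- parsed.headers # => [:first_name, :value]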
-
-==== Recipe: Filter Header Strings
-
-Define a custom header converter to modify \String headers.
-This example defines and uses a custom header converter
-that capitalizes each header \String:
- capitalize_converter = proc {|header| header.capitalize }
- source = "NAME,VALUE\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, headers: true, header_converters: capitalize_converter)
- parsed.headers # => ["Name", "Value"]
-
-==== Recipe: Register Header Converters
-
-Register a custom header converter, assigning it a name;
-then refer to the converter by its name:
- capitalize_converter = proc {|header| header.capitalize }
- CSV::HeaderConverters[:capitalize] = capitalize_converter
- source = "NAME,VALUE\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, headers: true, header_converters: :capitalize)
- parsed.headers # => ["Name", "Value"]
-
-==== Using Multiple Header Converters
-
-You can use multiple header converters in either of these ways:
-- Specify header converters in option +:header_converters+.
-- Specify header converters in a custom header converter list.
-
-===== Recipe: Specify Multiple Header Converters in Option +:header_converters+
-
-Apply multiple header converters by specifying them in option +:header_converters+:
- source = "Name,Value\nfoo,0\nbar,1.0\nbaz,2.0\n"
- parsed = CSV.parse(source, headers: true, header_converters: [:downcase, :symbol])
- parsed.headers # => [:name, :value]
-
-===== Recipe: Specify Multiple Header Converters in a Custom Header Converter List
-
-Apply multiple header converters by defining and registering a custom header converter list:
- CSV::HeaderConverters[:my_header_converters] = [:symbol, :downcase]
- source = "NAME,VALUE\nfoo,0\nbar,1.0\nbaz,2.0\n"
- parsed = CSV.parse(source, headers: true, header_converters: :my_header_converters)
- parsed.headers # => [:name, :value]
-
-=== Diagnostics
-
-==== Recipe: Capture Unconverted Fields
-
-To capture unconverted field values, use option +:unconverted_fields+:
- source = "Name,Value\nfoo,0\nbar,1\nbaz,2\n"
- parsed = CSV.parse(source, converters: :integer, unconverted_fields: true)
- parsed # => [["Name", "Value"], ["foo", 0], ["bar", 1], ["baz", 2]]
- parsed.each {|row| p row.unconverted_fields }
-Output:
- ["Name", "Value"]
- ["foo", "0"]
- ["bar", "1"]
- ["baz", "2"]
-
-==== Recipe: Capture Field Info
-
-To capture field info in a custom converter, accept two block arguments.
-The first is the field value; the second is a +CSV::FieldInfo+ object:
- strip_converter = proc {|field, field_info| p field_info; field.strip }
- source = " foo , 0 \n bar , 1 \n baz , 2 \n"
- parsed = CSV.parse(source, converters: strip_converter)
- parsed # => [["foo", "0"], ["bar", "1"], ["baz", "2"]]
-Output:
- #<struct CSV::FieldInfo index=0, line=1, header=nil>
- #<struct CSV::FieldInfo index=1, line=1, header=nil>
- #<struct CSV::FieldInfo index=0, line=2, header=nil>
- #<struct CSV::FieldInfo index=1, line=2, header=nil>
- #<struct CSV::FieldInfo index=0, line=3, header=nil>
- #<struct CSV::FieldInfo index=1, line=3, header=nil>
diff --git a/doc/csv/recipes/recipes.rdoc b/doc/csv/recipes/recipes.rdoc
deleted file mode 100644
index 9bf7885b1e..0000000000
--- a/doc/csv/recipes/recipes.rdoc
+++ /dev/null
@@ -1,6 +0,0 @@
-== Recipes for \CSV
-
-The recipes are short, task-specific code examples. See:
-- {Recipes for Parsing CSV}[./parsing_rdoc.html]
-- {Recipes for Generating CSV}[./generating_rdoc.html]
-- {Recipes for Filtering CSV}[./filtering_rdoc.html]