Forgot to attach the patch I mentioned.
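Since these are comment/doc/test-message changes only, applying and spot-checking it should be quick. Something like the following ought to work from a source checkout (the patch file name is just whatever you save the attachment as, and the test names are only examples picked from files the patch touches):

  git apply --check typo-fixes.patch   # dry run; prints nothing if the patch applies cleanly
  git apply typo-fixes.patch
  cd mysql-test && ./mtr main.insert_select perfschema.bad_option

The last step mainly confirms that the edited .test/.result pairs still agree.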
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
index bef89548fb9..95bb3ecf91e 100644
--- a/.github/pull_request_template.md
+++ b/.github/pull_request_template.md
@@ -33,7 +33,7 @@ In many cases, this will be as simple as modifying one `.test` and one `.result`
Without automated tests, future regressions in the expected behavior can't be automatically detected and verified.
-->
-If the changes are not amenable to automated testing, please explain why not and carefully describe how to test manually.
+If the changes are not amenable to automated testing, please explain why not and carefully describe how to test manually.
<!--
Tick one of the following boxes [x] to help us understand if the base branch for the PR is correct.
diff --git a/Docs/optimizer_costs.txt b/Docs/optimizer_costs.txt
index 8518728a43e..9929cb06c22 100644
--- a/Docs/optimizer_costs.txt
+++ b/Docs/optimizer_costs.txt
@@ -63,7 +63,7 @@ ROW_COPY_COST and KEY_COPY_COST
===============================
Regarding ROW_COPY_COST:
-When calulating cost of fetching a row, we have two alternativ cost
+When calculating cost of fetching a row, we have two alternative cost
parts (in addition to other costs):
scanning: rows * (ROW_NEXT_FIND_COST + ROW_COPY_COST)
rnd_pos: rows * (ROW_LOOKUP_COST + ROW_COPY_COST)
@@ -76,7 +76,7 @@ a table with 1 field). Because of this, I prefer to keep ROW_COPY_COST
around for now.
Regarding KEY_COPY_COST:
-When calulating cost of fetching a key we have as part of the cost:
+When calculating cost of fetching a key we have as part of the cost:
keyread_time: rows * KEY_COPY_COST + ranges * KEY_LOOKUP_COST +
(rows-ranges) * KEY_NEXT_FIND_COST
key_scan_time: rows * (KEY_NEXT_FIND_COST + KEY_COPY_COST)
diff --git a/mysql-test/include/execute_with_statistics.inc b/mysql-test/include/execute_with_statistics.inc
index c2305fe5247..2a926211409 100644
--- a/mysql-test/include/execute_with_statistics.inc
+++ b/mysql-test/include/execute_with_statistics.inc
@@ -7,7 +7,7 @@
# optimizer and total number of 'Handler_read%' when the
# query was executed.
# Intended usage is to verify that there are not regressions
-# in either calculated or actuall cost for $query.
+# in either calculated or actual cost for $query.
#
# USAGE
#
diff --git a/mysql-test/main/host_cache_size_functionality.test b/mysql-test/main/host_cache_size_functionality.test
index f37b2ab8c9e..bdb066d216d 100644
--- a/mysql-test/main/host_cache_size_functionality.test
+++ b/mysql-test/main/host_cache_size_functionality.test
@@ -15,7 +15,7 @@
# * Value Check #
# * Scope Check #
# * Functionality Check #
-# * Accessability Check #
+# * Accessibility Check #
# #
# This test does not perform the crash recovery on this variable #
# For crash recovery test on default change please run the ibtest #
diff --git a/mysql-test/main/insert_select.test b/mysql-test/main/insert_select.test
index 0e9bd05a93e..9c0d91ea016 100644
--- a/mysql-test/main/insert_select.test
+++ b/mysql-test/main/insert_select.test
@@ -176,7 +176,7 @@ select * from t2;
drop table t1, t2;
#
# BUGS #9728 - 'Decreased functionality in "on duplicate key update"'
-# #8147 - 'a column proclaimed ambigous in INSERT ... SELECT .. ON
+# #8147 - 'a column proclaimed ambiguous in INSERT ... SELECT .. ON
# DUPLICATE'
#
create table t1 (a int unique);
diff --git a/mysql-test/suite/perfschema/r/bad_option.result b/mysql-test/suite/perfschema/r/bad_option.result
index b14dad9600e..eb86d5aeffd 100644
--- a/mysql-test/suite/perfschema/r/bad_option.result
+++ b/mysql-test/suite/perfschema/r/bad_option.result
@@ -1,7 +1,7 @@
FOUND 1 /\[ERROR\].*unknown variable 'performance-schema-enabled=maybe'/ in my_restart.err
# Server start with invalid startup option value 'performance-schema-enabled=maybe' : pass
FOUND 1 /\[ERROR\].*unknown variable 'performance-schema-max_=12'/ in my_restart.err
-# Server start with ambigous startup option 'performance-schema-max_=12' : pass
+# Server start with ambiguous startup option 'performance-schema-max_=12' : pass
FOUND 1 /\[ERROR\].*unknown option '--performance-schema-unknown_99'/ in my_restart.err
# Server start with invalid startup option '--performance-schema-unknown_99' : pass
FOUND 1 /Can.t change dir to .*bad_option_h_param/ in my_restart.err
diff --git a/mysql-test/suite/perfschema/t/bad_option.test b/mysql-test/suite/perfschema/t/bad_option.test
index 3eee669bdde..2e7c1f96215 100644
--- a/mysql-test/suite/perfschema/t/bad_option.test
+++ b/mysql-test/suite/perfschema/t/bad_option.test
@@ -32,7 +32,7 @@ let SEARCH_PATTERN= \[ERROR\].*unknown variable 'performance-schema-enabled=mayb
# [ERROR] unknown variable 'performance-schema-max_=12'
let SEARCH_PATTERN= \[ERROR\].*unknown variable 'performance-schema-max_=12';
--source include/search_pattern_in_file.inc
---echo # Server start with ambigous startup option 'performance-schema-max_=12' : pass
+--echo # Server start with ambiguous startup option 'performance-schema-max_=12' : pass
# The important points is here:
# 1. There is no option 'performance-schema-max_' or 'performance-schema-max-' at all.
# 2. But we have many options where the name starts exact with this pattern.
diff --git a/mysys/my_access.c b/mysys/my_access.c
index bd722da1e59..9eac113d859 100644
--- a/mysys/my_access.c
+++ b/mysys/my_access.c
@@ -20,7 +20,7 @@
#ifdef _WIN32
/*
- Check a file or path for accessability.
+ Check a file or path for accessibility.
SYNOPSIS
file_access()
diff --git a/mysys/my_thr_init.c b/mysys/my_thr_init.c
index f40dab43fa8..6fca7371866 100644
--- a/mysys/my_thr_init.c
+++ b/mysys/my_thr_init.c
@@ -15,7 +15,7 @@
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1335 USA */
/*
- Functions to handle initializating and allocationg of all mysys & debug
+ Functions to handle initializing and allocating of all mysys & debug
thread variables.
*/
diff --git a/mysys/string.c b/mysys/string.c
index 91e4306ced4..6720f3fd4ef 100644
--- a/mysys/string.c
+++ b/mysys/string.c
@@ -133,7 +133,7 @@ my_bool dynstr_trunc(DYNAMIC_STRING *str, size_t n)
to specified DYNAMIC_STRING. This function is especially useful when
building strings to be executed with the system() function.
- @param str Dynamic String which will have addtional strings appended.
+ @param str Dynamic String which will have additional strings appended.
@param append String to be appended.
@param ... Optional. Additional string(s) to be appended.
diff --git a/mysys/tree.c b/mysys/tree.c
index db0442fa827..cc843b3b97f 100644
--- a/mysys/tree.c
+++ b/mysys/tree.c
@@ -16,7 +16,7 @@
/*
Code for handling red-black (balanced) binary trees.
- key in tree is allocated accrding to following:
+ key in tree is allocated according to following:
1) If size < 0 then tree will not allocate keys and only a pointer to
each key is saved in tree.
diff --git a/scripts/mytop.sh b/scripts/mytop.sh
index e5d926ef616..7a6182154a3 100644
--- a/scripts/mytop.sh
+++ b/scripts/mytop.sh
@@ -2123,7 +2123,7 @@ modules.
=head2 Optional Color Support
-In additon, if you want a color B<mytop> (recommended), install
+In addition, if you want a color B<mytop> (recommended), install
Term::ANSIColor from the CPAN:
http://search.cpan.org/search?dist=ANSIColor
diff --git a/sql/field.h b/sql/field.h
index bf99eb2820a..f79cd06ac75 100644
--- a/sql/field.h
+++ b/sql/field.h
@@ -5382,7 +5382,7 @@ class Column_definition: public Sql_alloc,
bool explicitly_nullable;
/*
- This is additinal data provided for any computed(virtual) field.
+ This is additional data provided for any computed(virtual) field.
In particular it includes a pointer to the item by which this field
can be computed from other fields.
*/
diff --git a/sql/handler.cc b/sql/handler.cc
index 9ca2fee591c..c77b852f004 100644
--- a/sql/handler.cc
+++ b/sql/handler.cc
@@ -5379,7 +5379,7 @@ bool non_existing_table_error(int error)
@retval
HA_ADMIN_NEEDS_DATA_CONVERSION
Table has structures requiring
- ALTER TABLE FORCE, algortithm=COPY to
+ ALTER TABLE FORCE, algorithm=COPY to
recreate data.
@retval
HA_ADMIN_NOT_IMPLEMENTED
diff --git a/sql/item.h b/sql/item.h
index 73d0d16f873..f6006da94a5 100644
--- a/sql/item.h
+++ b/sql/item.h
@@ -2864,7 +2864,7 @@ bool cmp_items(Item *a, Item *b);
/**
- Array of items, e.g. function or aggerate function arguments.
+ Array of items, e.g. function or aggregate function arguments.
*/
class Item_args
{
diff --git a/sql/item_cmpfunc.cc b/sql/item_cmpfunc.cc
index c244780835f..10d31c5157a 100644
--- a/sql/item_cmpfunc.cc
+++ b/sql/item_cmpfunc.cc
@@ -2751,7 +2751,7 @@ Item_func_nullif::fix_length_and_dec(THD *thd)
l_expr needs a special treatment, as it's referenced by both
args[0] and args[2] initially.
- args[2] is used to return the value. Afrer all transformations
+ args[2] is used to return the value. After all transformations
(e.g. in fix_length_and_dec(), equal field propagation, etc)
args[2] points to a an Item which preserves the exact data type and
attributes (e.g. collation) of the original l_expr.
diff --git a/sql/item_func.h b/sql/item_func.h
index d2da02fc39e..7d1dfc2d58b 100644
--- a/sql/item_func.h
+++ b/sql/item_func.h
@@ -2244,7 +2244,7 @@ class Item_func_rownum final :public Item_longlong_func
{
/*
This points to a variable that contains the number of rows
- accpted so far in the result set
+ accepted so far in the result set
*/
ha_rows *accepted_rows;
SELECT_LEX *select;
diff --git a/sql/item_sum.h b/sql/item_sum.h
index 62a5fd38fac..cb81bf093ef 100644
--- a/sql/item_sum.h
+++ b/sql/item_sum.h
@@ -69,7 +69,7 @@ class Aggregator : public Sql_alloc
/**
Called when we need to wipe out all the data from the aggregator :
- all the values acumulated and all the state.
+ all the values accumulated and all the state.
Cleans up the internal structures and resets them to their initial state.
*/
virtual void clear() = 0;
diff --git a/sql/json_schema.cc b/sql/json_schema.cc
index c5dfdec409a..6758a160aca 100644
--- a/sql/json_schema.cc
+++ b/sql/json_schema.cc
@@ -1847,7 +1847,7 @@ bool Json_schema_property_names::handle_keyword(THD *thd, json_engine_t *je,
}
/*
- additiona_items, additional_properties, unevaluated_items,
+ additional_items, additional_properties, unevaluated_items,
unevaluated_properties are all going to be schemas
(basically of object type). So they all can be handled
just like any other schema.
diff --git a/sql/json_table.cc b/sql/json_table.cc
index 905ad1ac303..5762ba5d781 100644
--- a/sql/json_table.cc
+++ b/sql/json_table.cc
@@ -910,7 +910,7 @@ int Json_table_column::set(THD *thd, enum_type ctype, const LEX_CSTRING &path,
/*
This is done so the ::print function can just print the path string.
Can be removed if we redo that function to print the path using it's
- anctual content. Not sure though if we should.
+ actual content. Not sure though if we should.
*/
m_path.s.c_str= (const uchar *) path.str;
diff --git a/sql/log_event.cc b/sql/log_event.cc
index 424fa80fb00..be20621888b 100644
--- a/sql/log_event.cc
+++ b/sql/log_event.cc
@@ -2336,7 +2336,7 @@ Format_description_log_event::is_version_before_checksum(const master_version_sp
@return the version-safe checksum alg descriptor where zero
designates no checksum, 255 - the orginator is
- checksum-unaware (effectively no checksum) and the actuall
+ checksum-unaware (effectively no checksum) and the actual
[1-254] range alg descriptor.
*/
enum_binlog_checksum_alg get_checksum_alg(const uchar *buf, ulong len)
diff --git a/sql/mysqld.cc b/sql/mysqld.cc
index ce75a02c51d..e0e81ec6f27 100644
--- a/sql/mysqld.cc
+++ b/sql/mysqld.cc
@@ -9345,7 +9345,7 @@ void refresh_global_status()
*/
reset_status_vars();
/*
- Reset accoumulated thread's status variables.
+ Reset accumulated thread's status variables.
These are the variables in 'status_vars[]' with the prefix _STATUS.
*/
bzero(&global_status_var, clear_for_flush_status);
diff --git a/sql/mysqld.h b/sql/mysqld.h
index 3cac9a6630a..491fabf39c7 100644
--- a/sql/mysqld.h
+++ b/sql/mysqld.h
@@ -857,7 +857,7 @@ enum enum_query_type
QT_SHOW_SELECT_NUMBER= (1<<10),
/// Do not print database name or table name in the identifiers (even if
- /// this means the printout will be ambigous). It is assumed that the caller
+ /// this means the printout will be ambiguous). It is assumed that the caller
/// passing this flag knows what they are doing.
QT_ITEM_IDENT_DISABLE_DB_TABLE_NAMES= (1 <<11),
diff --git a/sql/opt_range.cc b/sql/opt_range.cc
index bc8519cd7f6..a6bbecfa084 100644
--- a/sql/opt_range.cc
+++ b/sql/opt_range.cc
@@ -12318,7 +12318,7 @@ ha_rows check_quick_select(PARAM *param, uint idx, ha_rows limit,
estimates may be slightly out of sync.
We cannot do this easily in the above multi_range_read_info_const()
- call as then we would need to have similar adjustmends done
+ call as then we would need to have similar adjustments done
in the partitioning engine.
*/
rows= MY_MAX(table_records, 1);
diff --git a/sql/opt_subselect.cc b/sql/opt_subselect.cc
index 4ad0540a3d6..7211517998f 100644
--- a/sql/opt_subselect.cc
+++ b/sql/opt_subselect.cc
@@ -3149,7 +3149,7 @@ void optimize_semi_joins(JOIN *join, table_map remaining_tables, uint idx,
Update JOIN's semi-join optimization state after the join tab new_tab
has been added into the join prefix.
- @seealso restore_prev_sj_state() does the reverse actoion
+ @seealso restore_prev_sj_state() does the reverse action
*/
void update_sj_state(JOIN *join, const JOIN_TAB *new_tab,
diff --git a/sql/protocol.cc b/sql/protocol.cc
index 0af8598cc59..dfbea0df436 100644
--- a/sql/protocol.cc
+++ b/sql/protocol.cc
@@ -64,7 +64,7 @@ bool Protocol_binary::net_store_data(const uchar *from, size_t length)
net_store_data_cs() - extended version with character set conversion.
It is optimized for short strings whose length after
- conversion is garanteed to be less than 251, which accupies
+ conversion is guaranteed to be less than 251, which occupies
exactly one byte to store length. It allows not to use
the "convert" member as a temporary buffer, conversion
is done directly to the "packet" member.
@@ -482,7 +482,7 @@ bool Protocol::net_send_error_packet(THD *thd, uint sql_errno, const char *err,
We keep a separate version for that range because it's widely used in
libmysql.
- uint is used as agrument type because of MySQL type conventions:
+ uint is used as argument type because of MySQL type conventions:
- uint for 0..65536
- ulong for 0..4294967296
- ulonglong for bigger numbers.
diff --git a/sql/socketpair.c b/sql/socketpair.c
index ef89fa0446b..d913475a93b 100644
--- a/sql/socketpair.c
+++ b/sql/socketpair.c
@@ -26,7 +26,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* Changes:
- * 2023-12-25 Addopted for MariaDB usage
+ * 2023-12-25 Adopted for MariaDB usage
* 2014-02-12: merge David Woodhouse, Ger Hobbelt improvements
* git.infradead.org/users/dwmw2/openconnect.git/commitdiff/bdeefa54
* github.com/GerHobbelt/selectable-socketpair
diff --git a/sql/sql_base.cc b/sql/sql_base.cc
index 61621960aa8..73482cb22f5 100644
--- a/sql/sql_base.cc
+++ b/sql/sql_base.cc
@@ -8077,7 +8077,7 @@ bool setup_fields(THD *thd, Ref_ptr_array ref_pointer_array,
There is other way to solve problem: fill array with pointers to list,
but it will be slower.
- TODO: remove it when (if) we made one list for allfields and
+ TODO: remove it when (if) we made one list for all fields and
ref_pointer_array
*/
if (!ref_pointer_array.is_null())
diff --git a/sql/sql_cache.cc b/sql/sql_cache.cc
index 45f7d4a837b..08e2eb05b9e 100644
--- a/sql/sql_cache.cc
+++ b/sql/sql_cache.cc
@@ -4896,7 +4896,7 @@ my_bool Query_cache::check_integrity(bool locked)
DBUG_PRINT("qcache", ("block %p, type %u...",
block, (uint) block->type));
- // Check allignment
+ // Check alignment
if ((((size_t)block) % ALIGN_SIZE(1)) !=
(((size_t)first_block) % ALIGN_SIZE(1)))
{
diff --git a/sql/sql_class.h b/sql/sql_class.h
index 6c73bc58638..5aee561bc7d 100644
--- a/sql/sql_class.h
+++ b/sql/sql_class.h
@@ -4878,7 +4878,7 @@ class THD: public THD_count, /* this must be first */
/*
Mark thread to be killed, with optional error number and string.
- string is not released, so it has to be allocted on thd mem_root
+ string is not released, so it has to be allocated on thd mem_root
or be a global string
Ensure that we don't replace a kill with a lesser one. For example
diff --git a/sql/sql_connect.cc b/sql/sql_connect.cc
index d048df2ca70..a0df373f2bb 100644
--- a/sql/sql_connect.cc
+++ b/sql/sql_connect.cc
@@ -800,7 +800,7 @@ bool thd_init_client_charset(THD *thd, uint cs_number)
b. preserve non-default collations as is
Perhaps eventually we should change (b) also to resolve non-default
- collations accoding to @@character_set_collations. Clients that used to
+ collations according to @@character_set_collations. Clients that used to
send a non-default collation ID in the handshake packet will have to set
@@character_set_collations instead.
*/
@@ -1495,7 +1495,7 @@ void CONNECT::close_and_delete(uint err)
/*
Close a connection with a possible error to the end user
- Alse deletes the connection object, like close_and_delete()
+ Also deletes the connection object, like close_and_delete()
*/
void CONNECT::close_with_error(uint sql_errno,
diff --git a/sql/sql_const.h b/sql/sql_const.h
index 58e4a27dff8..31072682747 100644
--- a/sql/sql_const.h
+++ b/sql/sql_const.h
@@ -211,7 +211,7 @@
/*
The lower bound of accepted rows when using filter.
- This is used to ensure that filters are not too agressive.
+ This is used to ensure that filters are not too aggressive.
*/
#define MIN_ROWS_AFTER_FILTERING 1.0
diff --git a/sql/sql_cte.cc b/sql/sql_cte.cc
index b3434b42eb2..85f8522f137 100644
--- a/sql/sql_cte.cc
+++ b/sql/sql_cte.cc
@@ -692,7 +692,7 @@ With_element::check_dependencies_in_with_clause(With_clause *with_clause,
/**
@brief
- Find mutually recursive with elements and check that they have ancors
+ Find mutually recursive with elements and check that they have anchors
@details
This method performs the following:
diff --git a/sql/sql_join_cache.cc b/sql/sql_join_cache.cc
index 3f012629d6b..c0e90d7e6e6 100644
--- a/sql/sql_join_cache.cc
+++ b/sql/sql_join_cache.cc
@@ -2645,7 +2645,7 @@ inline bool JOIN_CACHE::check_match(uchar *rec_ptr)
NOTES
The same implementation of the virtual method join_null_complements
- is used for BNL/BNLH/BKA/BKA join algorthm.
+ is used for BNL/BNLH/BKA/BKA join algorithm.
RETURN VALUE
return one of enum_nested_loop_state.
diff --git a/sql/sql_lifo_buffer.h b/sql/sql_lifo_buffer.h
index afe47b5f415..44b65175116 100644
--- a/sql/sql_lifo_buffer.h
+++ b/sql/sql_lifo_buffer.h
@@ -31,7 +31,7 @@ class Backward_lifo_buffer;
- The buffer contains fixed-size elements. The elements are either atomic
byte sequences or pairs of them.
- The buffer resides in the memory provided by the user. It is possible to
- = dynamically (ie. between write operations) add ajacent memory space to
+ = dynamically (ie. between write operations) add adjacent memory space to
the buffer
= dynamically remove unused space from the buffer.
The intent of this is to allow to have two buffers on adjacent memory
diff --git a/sql/sql_load.cc b/sql/sql_load.cc
index 5cd86906c0d..001942c104a 100644
--- a/sql/sql_load.cc
+++ b/sql/sql_load.cc
@@ -1080,7 +1080,7 @@ read_fixed_length(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
uchar save_chr;
if ((length=(uint) (read_info.row_end - pos)) > fixed_length)
length= fixed_length;
- save_chr= pos[length]; pos[length]= '\0'; // Safeguard aganst malloc
+ save_chr= pos[length]; pos[length]= '\0'; // Safeguard against malloc
dst->load_data_set_value(thd, (const char *) pos, length, &read_info);
pos[length]= save_chr;
if ((pos+= length) > read_info.row_end)
diff --git a/sql/sql_parse.cc b/sql/sql_parse.cc
index b18cf0a1076..67356e8a34a 100644
--- a/sql/sql_parse.cc
+++ b/sql/sql_parse.cc
@@ -2963,7 +2963,7 @@ lock_tables_open_and_lock_tables(THD *thd, TABLE_LIST *tables)
{
/*
Deadlock occurred during upgrade of metadata lock.
- Let us restart acquring and opening tables for LOCK TABLES.
+ Let us restart acquiring and opening tables for LOCK TABLES.
*/
close_tables_for_reopen(thd, &tables, mdl_savepoint, true);
if (thd->open_temporary_tables(tables))
diff --git a/sql/sql_partition.cc b/sql/sql_partition.cc
index 5d1cf53afc1..b7c4516f2ab 100644
--- a/sql/sql_partition.cc
+++ b/sql/sql_partition.cc
@@ -2456,7 +2456,7 @@ static int add_partition_values(String *str, partition_info *part_info,
/**
- Add 'KEY' word, with optional 'ALGORTIHM = N'.
+ Add 'KEY' word, with optional 'ALGORITHM = N'.
@param str String to write to.
@param part_info partition_info holding the used key_algorithm
diff --git a/sql/sql_plugin.cc b/sql/sql_plugin.cc
index 0c297db34be..43134e561c4 100644
--- a/sql/sql_plugin.cc
+++ b/sql/sql_plugin.cc
@@ -3824,7 +3824,7 @@ void plugin_opt_set_limits(struct my_option *options,
The set is stored in the pre-allocated static array supplied to the function.
The size of the array is calculated as (number_of_plugin_varaibles*2+3). The
- reason is that each option can have a prefix '--plugin-' in addtion to the
+ reason is that each option can have a prefix '--plugin-' in addition to the
shorter form '--<plugin-name>'. There is also space allocated for
terminating NULL pointers.
diff --git a/sql/sql_select.cc b/sql/sql_select.cc
index c14afdc4679..e097de0e865 100644
--- a/sql/sql_select.cc
+++ b/sql/sql_select.cc
@@ -5261,7 +5261,7 @@ find_partial_select_handler(THD *thd, SELECT_LEX *select_lex,
WHERE clause of the top level select
@param og_num total number of ORDER BY and GROUP BY clauses
arguments
- @param order linked list of ORDER BY agruments
+ @param order linked list of ORDER BY arguments
@param group linked list of GROUP BY arguments
@param having top level item of HAVING expression
@param proc_param list of PROCEDUREs
@@ -5681,7 +5681,7 @@ make_join_statistics(JOIN *join, List<TABLE_LIST> &tables_list,
{
/*
Information schema is slow and we don't know how many rows we will
- find. Be setting a moderate ammount of rows we are more likely
+ find. By setting a moderate amount of rows we are more likely
to have it materialized if needed.
*/
table->file->stats.records= table->used_stat_records= 100;
@@ -8555,7 +8555,7 @@ const char* dbug_print_join_prefix(const POSITION *join_positions,
The function finds the best access path to table 's' from the passed
partial plan where an access path is the general term for any means to
- cacess the data in 's'. An access path may use either an index or a scan,
+ access the data in 's'. An access path may use either an index or a scan,
whichever is cheaper. The input partial plan is passed via the array
'join->positions' of length 'idx'. The chosen access method for 's' and its
cost are stored in 'join->positions[idx]'.
@@ -15701,7 +15701,7 @@ uint check_join_cache_usage(JOIN_TAB *tab,
The problem is, the temp.table is not filled (actually not even opened
properly) yet, and this doesn't let us call
handler->multi_range_read_info(). It is possible to come up with
- estimates, etc. without acessing the table, but it seems not to worth the
+ estimates, etc. without accessing the table, but it seems not worth the
effort now.
*/
if (tab->table->pos_in_table_list->is_materialized_derived())
@@ -23822,7 +23822,7 @@ bool instantiate_tmp_table(TABLE *table, KEY *keyinfo,
@param end_records TRUE <=> all records were accumulated, send them further
@details
- This function accumulates records of the aggreagation operation for
+ This function accumulates records of the aggregation operation for
the node join_tab from the execution plan in a tmp table. To add a new
record the function calls join_tab->aggr->put_records.
When there is no more records to save, in this
@@ -31443,7 +31443,7 @@ static void print_table_array(THD *thd,
TABLE_LIST *curr= *tbl;
/*
- The "eliminated_tables &&" check guards againist the case of
+ The "eliminated_tables &&" check guards against the case of
printing the query for CREATE VIEW. We do that without having run
JOIN::optimize() and so will have nested_join->used_tables==0.
*/
diff --git a/sql/sql_select.h b/sql/sql_select.h
index 2925b5bc95c..eaf4d4aa031 100644
--- a/sql/sql_select.h
+++ b/sql/sql_select.h
@@ -1166,7 +1166,7 @@ class Pushdown_query;
@details
The result records are obtained on the put_record() call.
- The aggrgation process is determined by the write_func, it could be:
+ The aggregation process is determined by the write_func, it could be:
end_write Simply store all records in tmp table.
end_write_group Perform grouping using join->group_fields,
records are expected to be sorted.
diff --git a/sql/sql_string.cc b/sql/sql_string.cc
index 087e03dccab..6ae063c520f 100644
--- a/sql/sql_string.cc
+++ b/sql/sql_string.cc
@@ -64,7 +64,7 @@ bool Binary_string::real_alloc(size_t length)
null character is inserted at the appropriate position.
- If the String does not keep a private buffer on the heap, such a buffer
- will be allocated and the string copied accoring to its length, as found
+ will be allocated and the string copied according to its length, as found
in String::length().
For C compatibility, the new string buffer is null terminated if it was
diff --git a/sql/sql_type.cc b/sql/sql_type.cc
index 5c367c994bc..97d961c204b 100644
--- a/sql/sql_type.cc
+++ b/sql/sql_type.cc
@@ -1411,7 +1411,7 @@ Type_handler::odbc_literal_type_handler(const LEX_CSTRING *type_str)
TODO: type_handler_adjusted_to_max_octet_length() and string_type_handler()
provide very similar functionality, to properly choose between
- VARCHAR/VARBINARY vs TEXT/BLOB variations taking into accoung maximum
+ VARCHAR/VARBINARY vs TEXT/BLOB variations taking into account maximum
possible octet length.
We should probably get rid of either of them and use the same method
diff --git a/sql/sql_update.cc b/sql/sql_update.cc
index f847e0d3d9e..c50e5c1804b 100644
--- a/sql/sql_update.cc
+++ b/sql/sql_update.cc
@@ -254,7 +254,7 @@ static void prepare_record_for_error_message(int error, TABLE *table)
/*
Only duplicate key errors print the key value.
- If storage engine does always read all columns, we have the value alraedy.
+ If storage engine does always read all columns, we have the value already.
*/
if ((error != HA_ERR_FOUND_DUPP_KEY) ||
!(table->file->ha_table_flags() & HA_PARTIAL_COLUMN_READ))
diff --git a/sql/sql_yacc.yy b/sql/sql_yacc.yy
index 59f3a714f6a..d945dae9f4a 100644
--- a/sql/sql_yacc.yy
+++ b/sql/sql_yacc.yy
@@ -13245,7 +13245,7 @@ procedure_clause:
/*
PROCEDURE CLAUSE cannot handle subquery as one of its parameter,
so disallow any subqueries further.
- Alow subqueries back once the parameters are reduced.
+ Allow subqueries back once the parameters are reduced.
*/
Lex->clause_that_disallows_subselect= "PROCEDURE";
Select->options|= OPTION_PROCEDURE_CLAUSE;
@@ -14659,7 +14659,7 @@ show_param:
}
| describe_command opt_format_json FOR_SYM expr
/*
- The alternaltive syntax for this command is MySQL-compatible
+ The alternative syntax for this command is MySQL-compatible
EXPLAIN FOR CONNECTION
*/
{
diff --git a/sql/table.cc b/sql/table.cc
index 76f706d849c..328d9591ac0 100644
--- a/sql/table.cc
+++ b/sql/table.cc
@@ -6904,7 +6904,7 @@ TABLE_LIST *TABLE_LIST::last_leaf_for_name_resolution()
SYNOPSIS
register_want_access()
- want_access Acess which we require
+ want_access Access which we require
*/
void TABLE_LIST::register_want_access(privilege_t want_access)
diff --git a/sql/table_cache.cc b/sql/table_cache.cc
index b804a3e0627..ad477b0b84e 100644
--- a/sql/table_cache.cc
+++ b/sql/table_cache.cc
@@ -159,7 +159,7 @@ struct Table_cache_instance
/**
Lock table cache mutex and check contention.
- Instance is considered contested if more than 20% of mutex acquisiotions
+ Instance is considered contested if more than 20% of mutex acquisitions
can't be served immediately. Up to 100 000 probes may be performed to avoid
instance activation on short sporadic peaks. 100 000 is estimated maximum
number of queries one instance can serve in one second.
@@ -168,8 +168,8 @@ struct Table_cache_instance
system, that is expected number of instances is activated within reasonable
warmup time. It may have to be adjusted for other systems.
- Only TABLE object acquistion is instrumented. We intentionally avoid this
- overhead on TABLE object release. All other table cache mutex acquistions
+ Only TABLE object acquisition is instrumented. We intentionally avoid this
+ overhead on TABLE object release. All other table cache mutex acquisitions
are considered out of hot path and are not instrumented either.
*/
void lock_and_check_contention(uint32_t n_instances, uint32_t instance)
diff --git a/sql/wsrep_sst.cc b/sql/wsrep_sst.cc
index 7097853e61b..2d8b1b5168b 100644
--- a/sql/wsrep_sst.cc
+++ b/sql/wsrep_sst.cc
@@ -1414,7 +1414,7 @@ std::string wsrep_sst_prepare()
if (is_ipv6)
{
- /* wsrep_sst_*.sh scripts requite ipv6 addreses to be in square breackets */
+ /* wsrep_sst_*.sh scripts require ipv6 addresses to be in square brackets */
ip_buf[0] = '[';
/* the length (len) already includes the null byte: */
memcpy(ip_buf + 1, address, len - 1);
diff --git a/storage/connect/ha_connect.cc b/storage/connect/ha_connect.cc
index 3757d0d1c03..a20f877f93d 100644
--- a/storage/connect/ha_connect.cc
+++ b/storage/connect/ha_connect.cc
@@ -326,7 +326,7 @@ static MYSQL_THDVAR_BOOL(cond_push, PLUGIN_VAR_RQCMDARG,
Temporary file usage:
no: Not using temporary file
auto: Using temporary file when needed
- yes: Allways using temporary file
+ yes: Always using temporary file
force: Force using temporary file (no MAP)
test: Reserved
*/
diff --git a/storage/connect/plgdbsem.h b/storage/connect/plgdbsem.h
index 4371f90a21d..9d90d9742ae 100644
--- a/storage/connect/plgdbsem.h
+++ b/storage/connect/plgdbsem.h
@@ -486,7 +486,7 @@ typedef struct _format { /* Format descriptor block */
/***********************************************************************/
typedef struct _tabptr { /* start=P1 */
struct _tabptr *Next;
- int Num; /* alignement */
+ int Num; /* alignment */
void *Old[50];
void *New[50]; /* old and new values of copied ptrs */
} TABPTR, *PTABPTR;
diff --git a/storage/connect/tabwmi.cpp b/storage/connect/tabwmi.cpp
index 1cd46a7442c..cef46bdf61e 100644
--- a/storage/connect/tabwmi.cpp
+++ b/storage/connect/tabwmi.cpp
@@ -599,7 +599,7 @@ int TDBWMI::GetMaxSize(PGLOBAL g)
/*******************************************************************/
/* Loop enumerating to get the count. This is prone to last a */
/* very long time for some classes such as DataFile, this is why */
- /* we just return an estimated value that will be ajusted later. */
+ /* we just return an estimated value that will be adjusted later. */
/*******************************************************************/
MaxSize = Ems;
#if 0
diff --git a/storage/connect/value.cpp b/storage/connect/value.cpp
index e64c4813046..516dace4078 100644
--- a/storage/connect/value.cpp
+++ b/storage/connect/value.cpp
@@ -2568,7 +2568,7 @@ bool DTVAL::MakeDate(PGLOBAL g, int *val, int nval)
case 1:
// If mktime handles apparently correctly large or negative
// day values, it is not the same for months. Therefore we
- // do the ajustment here, thus mktime has not to do it.
+ // do the adjustment here, thus mktime has not to do it.
if (n > 0) {
m = (n - 1) % 12;
n = (n - 1) / 12;
diff --git a/storage/heap/hp_create.c b/storage/heap/hp_create.c
index f35e8e3fac9..b1caf001ef1 100644
--- a/storage/heap/hp_create.c
+++ b/storage/heap/hp_create.c
@@ -25,7 +25,7 @@ static void init_block(HP_BLOCK *block, size_t reclength, ulong min_records,
/*
In how many parts are we going to do allocations of memory and indexes
If we assigne 1M to the heap table memory, we will allocate roughly
- (1M/16) bytes per allocaiton
+ (1M/16) bytes per allocation
*/
static const int heap_allocation_parts= 16;
@@ -361,7 +361,7 @@ static void init_block(HP_BLOCK *block, size_t reclength, ulong min_records,
block->records_in_block= records_in_block;
block->recbuffer= recbuffer;
block->last_allocated= 0L;
- /* All alloctions are done with this size, if possible */
+ /* All allocations are done with this size, if possible */
block->alloc_size= alloc_size - MALLOC_OVERHEAD;
for (i= 0; i <= HP_MAX_LEVELS; i++)
diff --git a/storage/innobase/btr/btr0bulk.cc b/storage/innobase/btr/btr0bulk.cc
index 6b385997f53..a021dbabf46 100644
--- a/storage/innobase/btr/btr0bulk.cc
+++ b/storage/innobase/btr/btr0bulk.cc
@@ -667,7 +667,7 @@ PageBulk::copyOut(
infimum->r1->r2->r3->r4->r5->supremum, and r3 is the split rec.
after copyOut, we have 2 records on the page:
- infimum->r1->r2->supremum. slot ajustment is not done. */
+ infimum->r1->r2->supremum. slot adjustment is not done. */
rec_t *rec = page_get_infimum_rec(m_page);
ulint n;
diff --git a/storage/innobase/btr/btr0cur.cc b/storage/innobase/btr/btr0cur.cc
index d34656abc2b..50453ba435d 100644
--- a/storage/innobase/btr/btr0cur.cc
+++ b/storage/innobase/btr/btr0cur.cc
@@ -5639,7 +5639,7 @@ static void btr_blob_free(buf_block_t *block, bool all, mtr_t *mtr)
if (!buf_LRU_free_page(&block->page, all) && all && block->page.zip.data)
/* Attempt to deallocate the redundant copy of the uncompressed page
- if the whole ROW_FORMAT=COMPRESSED block cannot be deallocted. */
+ if the whole ROW_FORMAT=COMPRESSED block cannot be deallocated. */
buf_LRU_free_page(&block->page, false);
mysql_mutex_unlock(&buf_pool.mutex);
diff --git a/storage/innobase/buf/buf0buf.cc b/storage/innobase/buf/buf0buf.cc
index 5d61b5f55c9..96604efad39 100644
--- a/storage/innobase/buf/buf0buf.cc
+++ b/storage/innobase/buf/buf0buf.cc
@@ -728,7 +728,7 @@ buf_page_is_corrupted(bool check_lsn, const byte *read_buf, uint32_t fsp_flags)
start and the end of the page. */
/* Since innodb_checksum_algorithm is not strict_* allow
- any of the algos to match for the old field */
+ any of the algorithms to match for the old field */
if (checksum_field2
!= mach_read_from_4(read_buf + FIL_PAGE_LSN)
diff --git a/storage/innobase/buf/buf0flu.cc b/storage/innobase/buf/buf0flu.cc
index 5e0ebf63150..23352c4856e 100644
--- a/storage/innobase/buf/buf0flu.cc
+++ b/storage/innobase/buf/buf0flu.cc
@@ -2220,7 +2220,7 @@ static void buf_flush_sync_for_checkpoint(lsn_t lsn) noexcept
mysql_mutex_unlock(&buf_pool.flush_list_mutex);
}
-/** Check if the adpative flushing threshold is recommended based on
+/** Check if the adaptive flushing threshold is recommended based on
redo log capacity filled threshold.
@param oldest_lsn buf_pool.get_oldest_modification()
@return true if adaptive flushing is recommended. */
diff --git a/storage/innobase/fts/fts0ast.cc b/storage/innobase/fts/fts0ast.cc
index 74d02d63817..dd8a47e992a 100644
--- a/storage/innobase/fts/fts0ast.cc
+++ b/storage/innobase/fts/fts0ast.cc
@@ -688,7 +688,7 @@ fts_ast_visit(
continue;
}
- /* Process leaf node accroding to its pass.*/
+ /* Process leaf node according to its pass.*/
if (oper == FTS_EXIST_SKIP
&& visit_pass == FTS_PASS_EXIST) {
error = visitor(FTS_EXIST, node, arg);
diff --git a/storage/innobase/gis/gis0rtree.cc b/storage/innobase/gis/gis0rtree.cc
index 2c7db6f5f5d..d0dcdcfc684 100644
--- a/storage/innobase/gis/gis0rtree.cc
+++ b/storage/innobase/gis/gis0rtree.cc
@@ -74,7 +74,7 @@ rtr_page_split_initialize_nodes(
n_recs = ulint(page_get_n_recs(page)) + 1;
/*We reserve 2 MBRs memory space for temp result of split
- algrithm. And plus the new mbr that need to insert, we
+ algorithm. And plus the new mbr that need to insert, we
need (n_recs + 3)*MBR size for storing all MBRs.*/
buf = static_cast<double*>(mem_heap_alloc(
heap, DATA_MBR_LEN * (n_recs + 3)
diff --git a/storage/innobase/include/lock0prdt.h b/storage/innobase/include/lock0prdt.h
index db8e33922c4..6803ff8f36a 100644
--- a/storage/innobase/include/lock0prdt.h
+++ b/storage/innobase/include/lock0prdt.h
@@ -109,7 +109,7 @@ lock_prdt_update_split(
const page_id_t page_id); /*!< in: page number */
/**************************************************************//**
-Ajust locks from an ancester page of Rtree on the appropriate level . */
+Adjust locks from an ancestor page of Rtree on the appropriate level . */
void
lock_prdt_update_parent(
/*====================*/
diff --git a/storage/innobase/include/trx0purge.h b/storage/innobase/include/trx0purge.h
index 21ec23817d2..e05e3b1d581 100644
--- a/storage/innobase/include/trx0purge.h
+++ b/storage/innobase/include/trx0purge.h
@@ -265,7 +265,7 @@ class purge_sys_t
return purge_queue.clone_container();
}
- /** Acquare purge_queue_mutex */
+ /** Acquire purge_queue_mutex */
void queue_lock() { mysql_mutex_lock(&pq_mutex); }
/** Release purge queue mutex */
diff --git a/storage/innobase/include/univ.i b/storage/innobase/include/univ.i
index 490f71653f7..3d11dd5e811 100644
--- a/storage/innobase/include/univ.i
+++ b/storage/innobase/include/univ.i
@@ -260,7 +260,7 @@ database name and table name. In addition, 14 bytes is added for:
#define MAX_FULL_NAME_LEN \
(MAX_TABLE_NAME_LEN + MAX_DATABASE_NAME_LEN + 14)
-/** Maximum length of the compression alogrithm string. Currently we support
+/** Maximum length of the compression algorithm string. Currently we support
only (NONE | ZLIB | LZ4). */
#define MAX_COMPRESSION_LEN 4
diff --git a/storage/innobase/include/ut0pool.h b/storage/innobase/include/ut0pool.h
index e5df50fa071..dbe290faf45 100644
--- a/storage/innobase/include/ut0pool.h
+++ b/storage/innobase/include/ut0pool.h
@@ -204,7 +204,7 @@ struct Pool {
/** Upper limit of used space */
Element* m_last;
- /** Priority queue ordered on the pointer addresse. */
+ /** Priority queue ordered on the pointer addresses. */
pqueue_t m_pqueue;
/** Lock strategy to use */
diff --git a/storage/innobase/lock/lock0lock.cc b/storage/innobase/lock/lock0lock.cc
index 1fc8b52e940..9d463480e55 100644
--- a/storage/innobase/lock/lock0lock.cc
+++ b/storage/innobase/lock/lock0lock.cc
@@ -4172,7 +4172,7 @@ void lock_table_resurrect(dict_table_t *table, trx_t *trx, lock_mode mode)
{
/* This is executed at server startup while no connections
- are alowed. Do not bother with lock elision. */
+ are allowed. Do not bother with lock elision. */
LockMutexGuard g{SRW_LOCK_CALL};
ut_ad(!lock_table_other_has_incompatible(trx, LOCK_WAIT, table, mode));
diff --git a/storage/innobase/log/log0sync.cc b/storage/innobase/log/log0sync.cc
index f6ca440efa8..8ba24143c17 100644
--- a/storage/innobase/log/log0sync.cc
+++ b/storage/innobase/log/log0sync.cc
@@ -52,7 +52,7 @@ d) Something else.
Make use of the waiter's lsn parameter, and only wakeup "right" waiting
threads.
-We chose d). Even if implementation is more complicated than alternatves
+We chose d). Even if implementation is more complicated than alternatives
due to the need to maintain list of waiters, it provides the best performance.
See group_commit_lock implementation for details.
diff --git a/storage/innobase/row/row0import.cc b/storage/innobase/row/row0import.cc
index 809a9128838..27c7576d9e1 100644
--- a/storage/innobase/row/row0import.cc
+++ b/storage/innobase/row/row0import.cc
@@ -1007,7 +1007,7 @@ class PageConverter : public AbstractCallback {
rec_t* rec,
const rec_offs* offsets) UNIV_NOTHROW;
- /** In the clustered index, adjist the BLOB pointers as needed.
+ /** In the clustered index, adjust the BLOB pointers as needed.
Also update the BLOB reference, write the new space id.
@param rec record to update
@param offsets column offsets for the record
diff --git a/storage/innobase/trx/trx0roll.cc b/storage/innobase/trx/trx0roll.cc
index 5c89bfb7c33..03900cb8a58 100644
--- a/storage/innobase/trx/trx0roll.cc
+++ b/storage/innobase/trx/trx0roll.cc
@@ -201,7 +201,7 @@ dberr_t trx_rollback_for_mysql(trx_t* trx)
case TRX_STATE_NOT_STARTED:
trx->will_lock = false;
ut_ad(trx->mysql_thd);
- /* Galera transaction abort can be invoked from MDL acquision
+ /* Galera transaction abort can be invoked from MDL acquisition
code, so trx->lock.was_chosen_as_deadlock_victim can be set
even if trx->state is TRX_STATE_NOT_STARTED. */
ut_ad(!(trx->lock.was_chosen_as_deadlock_victim & 1));
diff --git a/storage/maria/aria_pack.c b/storage/maria/aria_pack.c
index 43150d6e02c..8b0fb6969ff 100644
--- a/storage/maria/aria_pack.c
+++ b/storage/maria/aria_pack.c
@@ -1497,7 +1497,7 @@ test_space_compress(HUFF_COUNTS *huff_counts, my_off_t records,
min_pos= -2;
huff_counts->counts[(uint) ' ']=space_count;
- /* Test with allways space-count */
+ /* Test with always space-count */
new_length=huff_counts->bytes_packed+length_bits*records/8;
if (new_length+1 < min_pack)
{
diff --git a/storage/maria/ma_delete.c b/storage/maria/ma_delete.c
index 77ffb47d93c..349da9f904f 100644
--- a/storage/maria/ma_delete.c
+++ b/storage/maria/ma_delete.c
@@ -754,7 +754,7 @@ static int del(MARIA_HA *info, MARIA_KEY *key,
@brief Balances adjacent pages if underflow occours
@fn underflow()
- @param anc_buff Anchestor page data
+ @param anc_buff Ancestor page data
@param leaf_page Leaf page (page that underflowed)
@param leaf_page_link Pointer to pin information about leaf page
@param keypos Position after current key in anc_buff
diff --git a/storage/maria/ma_extra.c b/storage/maria/ma_extra.c
index 0709f71ce18..49b325a4af8 100644
--- a/storage/maria/ma_extra.c
+++ b/storage/maria/ma_extra.c
@@ -225,7 +225,7 @@ int maria_extra(MARIA_HA *info, enum ha_extra_function function,
info->read_record= share->read_record;
info->opt_flag&= ~(KEY_READ_USED | REMEMBER_OLD_POS);
break;
- case HA_EXTRA_NO_USER_CHANGE: /* Database is somehow locked agains changes */
+ case HA_EXTRA_NO_USER_CHANGE: /* Database is somehow locked against changes */
info->lock_type= F_EXTRA_LCK; /* Simulate as locked */
break;
case HA_EXTRA_WAIT_LOCK:
diff --git a/storage/maria/ma_sort.c b/storage/maria/ma_sort.c
index 97f22103e46..9363eb6dce9 100644
--- a/storage/maria/ma_sort.c
+++ b/storage/maria/ma_sort.c
@@ -942,7 +942,7 @@ static int merge_many_buff(MARIA_SORT_PARAM *info, ha_keys keys,
buffpek Where to read from
sort_length max length to read
RESULT
- > 0 Ammount of bytes read
+ > 0 Amount of bytes read
-1 Error
*/
diff --git a/storage/myisam/myisampack.c b/storage/myisam/myisampack.c
index 077507e897c..e4cb51a2250 100644
--- a/storage/myisam/myisampack.c
+++ b/storage/myisam/myisampack.c
@@ -1417,7 +1417,7 @@ test_space_compress(HUFF_COUNTS *huff_counts, my_off_t records,
min_pos= -2;
huff_counts->counts[(uint) ' ']=space_count;
- /* Test with allways space-count */
+ /* Test with always space-count */
new_length=huff_counts->bytes_packed+length_bits*records/8;
if (new_length+1 < min_pack)
{
diff --git a/strings/ctype-uca.c b/strings/ctype-uca.c
index 90568eaff00..01c97389b43 100644
--- a/strings/ctype-uca.c
+++ b/strings/ctype-uca.c
@@ -32352,7 +32352,7 @@ typedef struct my_coll_lexem_st
/*
- Initialize collation rule lexical anilizer
+ Initialize collation rule lexical analyzer
SYNOPSIS
my_coll_lexem_init
diff --git a/strings/string.doc b/strings/string.doc
index 09572c968d4..4a4c3d626c5 100644
--- a/strings/string.doc
+++ b/strings/string.doc
@@ -1,4 +1,4 @@
-Speciella användbara nya string-rutiner:
+Speciella användbara nya string-rutiner:
bcmp(s1, s2, len) returns 0 if the "len" bytes starting at "s1" are
identical to the "len" bytes starting at "s2", non-zero if they are
@@ -32,10 +32,10 @@ Speciella anv
A better inplementation of the UNIX ctype(3) library.
Notes: global.h should be included before ctype.h
- Se efter i filen \c\local\include\m_ctype.h
- - Används istället för ctype.h för att klara internationella karakterer.
+ - Används istället för ctype.h för att klara internationella karakterer.
m_string.h
- Använd instället för string.h för att supporta snabbare strängfunktioner.
+ Använd instället för string.h för att supporta snabbare strängfunktioner.
strintstr(src, from, pat) looks for an instance of pat in src
backwards from pos from. pat is not a regex(3) pattern, it is a literal
@@ -45,7 +45,7 @@ Speciella anv
strappend(dest, len, fill) appends fill-characters to a string so that
the result length == len. If the string is longer than len it's
- trunked. The des+len character is allways set to NULL.
+ truncated. The dest+len character is always set to NULL.
strcat(s, t) concatenates t on the end of s. There had better be
enough room in the space s points to; strcat has no way to tell.
@@ -55,7 +55,7 @@ Speciella anv
rather than
strcat(strcat(strcat(strcpy(s,a),b),c),d).
strcat returns the old value of s.
- - Använd inte strcat, använd strmov (se ovan).
+ - Använd inte strcat, använd strmov (se ovan).
strcend(s, c) returns a pointer to the first place in s where c
occurs, or a pointer to the end-null of s if c does not occur in s.
@@ -69,7 +69,7 @@ Speciella anv
the end of strings. It is redundant, because strchr(s,'\0') could
strfill(dest, len, fill) makes a string of fill-characters. The result
- string is of length == len. The des+len character is allways set to NULL.
+ string is of length == len. The dest+len character is always set to NULL.
strfill() returns pointer to dest+len;
strfind(src, pat) looks for an instance of pat in src. pat is not a
diff --git a/support-files/mysql.server.sh b/support-files/mysql.server.sh
index dd8cbd4850e..e259fc542de 100644
--- a/support-files/mysql.server.sh
+++ b/support-files/mysql.server.sh
@@ -45,7 +45,7 @@
basedir=
datadir=
-# Default value, in seconds, afterwhich the script should timeout waiting
+# Default value, in seconds, after which the script should timeout waiting
# for server start.
# Value here is overridden by value in my.cnf.
# 0 means don't wait at all