This is an automated email from the ASF dual-hosted git repository.
klesh pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-devlake.git
The following commit(s) were added to refs/heads/main by this push:
new b68c102f2 Add codespell support with configuration and fixes (#8761)
b68c102f2 is described below
commit b68c102f2534bf70162539ddd8ffd5848e6edf4e
Author: Yaroslav Halchenko <[email protected]>
AuthorDate: Thu Mar 19 10:40:06 2026 -0400
Add codespell support with configuration and fixes (#8761)
* ci(codespell): add codespell config and GitHub Actions workflow
Add .codespellrc with skip patterns for generated files, camelCase/PascalCase
ignore-regex, and project-specific word list (convertor, crypted, te, thur).
Add GitHub Actions workflow to run codespell on push to main and PRs.
Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <[email protected]>
Signed-off-by: Yaroslav Halchenko <[email protected]>
* fix(codespell): fix ambiguous typos requiring context review
Manual fixes for typos that needed human review to avoid breaking code:
- Comment/string typos: occured->occurred, destory->destroy, writting->writing,
  retreive->retrieve, identifer->identifier, etc.
- Struct field comments and documentation corrections
- Migration script comment fixes (preserving Go identifiers like DataConvertor)
Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <[email protected]>
Signed-off-by: Yaroslav Halchenko <[email protected]>
* fix(codespell): fix non-ambiguous typos with codespell -w
Automated fix via `codespell -w` for clear-cut typos across backend, config-ui,
and grafana dashboards. Examples: sucess->success, occurence->occurrence,
exeucte->execute, asynchornous->asynchronous, Grafana panel typos, etc.
Co-Authored-By: Claude Code 2.1.63 / Claude Opus 4.6 <[email protected]>
Signed-off-by: Yaroslav Halchenko <[email protected]>
---------
Signed-off-by: Yaroslav Halchenko <[email protected]>
Co-authored-by: Claude Code 2.1.63 / Claude Opus 4.6 <[email protected]>
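The split between the manual-review commit and the `codespell -w` commit above follows from how codespell's write mode behaves: it only rewrites a typo whose dictionary entry offers a single candidate. A rough sketch of that rule (the `is_auto_fixable` helper is illustrative, not part of codespell):

```python
# Illustrative sketch: codespell's -w flag only applies fixes whose
# dictionary entry suggests exactly one replacement; entries listing
# several candidates (or a trailing reason note) are only reported and
# need the kind of manual review done in the second commit.
def is_auto_fixable(dictionary_line: str) -> bool:
    typo, arrow, suggestions = dictionary_line.partition("->")
    # A comma after "->" means multiple candidates or a reason field,
    # so -w will flag the typo but not rewrite it.
    return arrow == "->" and "," not in suggestions

assert is_auto_fixable("occured->occurred")        # clear-cut, auto-fixed
assert not is_auto_fixable("te->the, be, we, to")  # ambiguous, manual review
```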
---
.codespellrc | 28 ++++++++++++++++
.github/actions/auto-cherry-pick/action.yml | 2 +-
.github/workflows/codespell.yml | 39 ++++++++++++++++++++++
backend/core/models/domainlayer/README.md | 2 +-
backend/core/models/locking.go | 2 +-
backend/core/plugin/plugin_blueprint.go | 4 +--
backend/core/plugin/plugin_task.go | 2 +-
backend/core/utils/network_helper_test.go | 2 +-
backend/helpers/e2ehelper/data_flow_tester.go | 2 +-
backend/helpers/migrationhelper/migrationhelper.go | 6 ++--
.../migrationhelper/migrationhelper_test.go | 16 ++++-----
.../helpers/pluginhelper/api/api_async_client.go | 2 +-
backend/helpers/pluginhelper/api/api_client.go | 2 +-
.../helpers/pluginhelper/api/batch_save_divider.go | 4 +--
.../pluginhelper/api/data_convertor_stateful.go | 2 +-
.../api/ds_remote_api_scope_list_helper.go | 2 +-
backend/helpers/pluginhelper/api/iterator.go | 4 +--
.../helpers/pluginhelper/api/model_api_helper.go | 4 +--
.../helpers/pluginhelper/api/worker_scheduler.go | 4 +--
backend/helpers/pluginhelper/csv_file_test.go | 4 +--
backend/helpers/srvhelper/model_service_helper.go | 2 +-
.../helpers/srvhelper/scope_service_helper_test.go | 2 +-
backend/impls/context/default_basic_res.go | 2 +-
backend/impls/dalgorm/dalgorm.go | 4 +--
backend/plugins/ae/api/connection.go | 2 +-
backend/plugins/bamboo/api/connection_api.go | 10 +++---
backend/plugins/bamboo/models/plan.go | 2 +-
backend/plugins/bamboo/tasks/shared.go | 2 +-
backend/plugins/customize/service/service.go | 2 +-
backend/plugins/dora/api/data.go | 6 ++--
.../models/migrationscripts/archived/connection.go | 2 +-
backend/plugins/gitextractor/parser/repo_gogit.go | 2 +-
backend/plugins/gitlab/e2e/job_test.go | 2 +-
.../models/migrationscripts/archived/connection.go | 2 +-
backend/plugins/gitlab/tasks/shared.go | 2 +-
.../tasks/issue_status_history_convertor.go | 2 +-
backend/plugins/jenkins/models/build.go | 2 +-
.../20220916_modify_jenkins_build.go | 4 +--
.../20221131_add_fullName_for_builds.go | 4 +--
.../models/migrationscripts/archived/build.go | 2 +-
.../migrationscripts/20220716_add_init_tables.go | 2 +-
.../models/migrationscripts/archived/source.go | 2 +-
backend/plugins/jira/tasks/issue_extractor.go | 2 +-
backend/plugins/linker/impl/impl.go | 2 +-
backend/plugins/refdiff/tasks/refdiff_task_data.go | 6 ++--
backend/plugins/tapd/api/blueprint_v200.go | 2 +-
.../migrationscripts/archived/tapd_connection.go | 2 +-
.../models/migrationscripts/archived/connection.go | 2 +-
.../tasks/execution_summary_dev_extractor.go | 2 +-
backend/plugins/zentao/tasks/task_collector.go | 2 +-
backend/python/pydevlake/pydevlake/api.py | 2 +-
backend/server/api/README.md | 2 +-
backend/server/services/blueprint.go | 2 +-
backend/server/services/pipeline.go | 6 ++--
backend/server/services/pipeline_runner.go | 4 +--
backend/server/services/project.go | 2 +-
backend/test/helper/client.go | 2 +-
.../src/plugins/register/github/transformation.tsx | 2 +-
.../detail/components/sync-policy/index.tsx | 12 +++----
config-ui/src/routes/onboard/components/card.tsx | 4 +--
config-ui/src/routes/onboard/index.tsx | 2 +-
config-ui/src/routes/onboard/styled.ts | 12 +++----
.../DeliveryQuality(RequireJiraAndGitlabData).json | 2 +-
grafana/_archive/Gitlab.json | 4 +--
grafana/dashboards/DORADebug.json | 4 +--
.../dashboards/DORADetails-ChangeFailureRate.json | 2 +-
...moHowFastDoWeRespondToCustomerRequirements.json | 2 +-
67 files changed, 174 insertions(+), 107 deletions(-)
diff --git a/.codespellrc b/.codespellrc
new file mode 100644
index 000000000..a10c0a3c6
--- /dev/null
+++ b/.codespellrc
@@ -0,0 +1,28 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+[codespell]
+# Ref: https://github.com/codespell-project/codespell#using-a-config-file
+skip = .git,.gitignore,.gitattributes,*.svg,go.sum,*.lock,*.css,.codespellrc,.cache,.npm,.yarn,*/e2e/raw_tables/*,*/e2e/snapshot_tables/*
+check-hidden = true
+# Ignore camelCase and PascalCase identifiers (common in Go and TypeScript code)
+ignore-regex = \b[a-z]+[A-Z]\w*\b|\b[A-Z][a-z]+[A-Z]\w*\b
+# convertor,convertors - project's deliberate spelling for Go types and filenames (DataConvertor, etc.)
+# crypted - variable name in AES encrypt/decrypt functions
+# te - Tapd API field name (Te/te struct fields)
+# thur - Thursday abbreviation in Grafana dashboard SQL column alias
+ignore-words-list = convertor,convertors,crypted,te,thur
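As a sanity check, the `ignore-regex` above can be exercised with Python's `re` module (the sample identifiers below are illustrative):

```python
import re

# The ignore-regex from .codespellrc: the first alternative matches
# camelCase tokens (lowercase run, then a capital), the second matches
# PascalCase tokens with an interior capital.
IGNORE = re.compile(r"\b[a-z]+[A-Z]\w*\b|\b[A-Z][a-z]+[A-Z]\w*\b")

assert IGNORE.search("dataConvertor")      # camelCase  -> skipped
assert IGNORE.search("DataConvertor")      # PascalCase -> skipped
assert IGNORE.search("occured") is None    # plain word -> still checked
assert IGNORE.search("Convertor") is None  # single capital -> still checked
```

Note the last case: a capitalized word with no interior capital is not skipped by the regex, which is why `convertor` also appears in `ignore-words-list`.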
diff --git a/.github/actions/auto-cherry-pick/action.yml b/.github/actions/auto-cherry-pick/action.yml
index 94eafe36f..054ffbcf3 100644
--- a/.github/actions/auto-cherry-pick/action.yml
+++ b/.github/actions/auto-cherry-pick/action.yml
@@ -19,7 +19,7 @@ name: "Auto Cherry Pick"
description: "cherry pick commits from Pull Requests into Release branch"
inputs:
trigger_label_prefix:
- description: "The trigger label prefic"
+ description: "The trigger label prefix"
default: "needs-cherrypick-"
required: false
author_email:
diff --git a/.github/workflows/codespell.yml b/.github/workflows/codespell.yml
new file mode 100644
index 000000000..7697e7bd0
--- /dev/null
+++ b/.github/workflows/codespell.yml
@@ -0,0 +1,39 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Codespell configuration is within .codespellrc
+---
+name: Codespell
+
+on:
+ push:
+ branches: [main]
+ pull_request:
+ branches: [main]
+
+permissions:
+ contents: read
+
+jobs:
+ codespell:
+ name: Check for spelling errors
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v4
+ - name: Codespell
+ uses: codespell-project/actions-codespell@v2
diff --git a/backend/core/models/domainlayer/README.md b/backend/core/models/domainlayer/README.md
index e6b85d4bb..18f082db2 100644
--- a/backend/core/models/domainlayer/README.md
+++ b/backend/core/models/domainlayer/README.md
@@ -57,7 +57,7 @@ The following rules make sure Domain Layer Entities serve its purpose
- Read data from platform specific table, convert and store record into one(or multiple) domain table(s)
- Generate its own `Id` accordingly
-- Generate foreign key accordlingly
+- Generate foreign key accordingly
- Fields conversion
Sample code:
diff --git a/backend/core/models/locking.go b/backend/core/models/locking.go
index 9510e24f3..ff2b1cdd5 100644
--- a/backend/core/models/locking.go
+++ b/backend/core/models/locking.go
@@ -19,7 +19,7 @@ package models
import "time"
-// LockingHistory is desgned for preventing mutiple delake instances from sharing the same database which may cause
+// LockingHistory is desgned for preventing multiple delake instances from sharing the same database which may cause
// problems like #3537, #3466. It works by the following step:
//
// 1. Each devlake insert a record to this table whie `Succeeded=false`
diff --git a/backend/core/plugin/plugin_blueprint.go b/backend/core/plugin/plugin_blueprint.go
index 492e54d43..35c36b36b 100644
--- a/backend/core/plugin/plugin_blueprint.go
+++ b/backend/core/plugin/plugin_blueprint.go
@@ -78,9 +78,9 @@ type DataSourcePluginBlueprintV200 interface {
// BlueprintConnectionV200 contains the pluginName/connectionId and related Scopes,
// MetricPluginBlueprintV200 is similar to the DataSourcePluginBlueprintV200
-// but for Metric Plugin, take dora as an example, it doens't have any scope,
+// but for Metric Plugin, take dora as an example, it doesn't have any scope,
// nor does it produce any, however, it does require other plugin to be
-// executed beforehand, like calcuating refdiff before it can connect PR to the
+// executed beforehand, like calculating refdiff before it can connect PR to the
// right Deployment keep in mind it would be called IFF the plugin was enabled
// for the project.
type MetricPluginBlueprintV200 interface {
diff --git a/backend/core/plugin/plugin_task.go b/backend/core/plugin/plugin_task.go
index 136f4f17e..e1404ff08 100644
--- a/backend/core/plugin/plugin_task.go
+++ b/backend/core/plugin/plugin_task.go
@@ -106,7 +106,7 @@ type SubTaskMeta struct {
Dependencies []*SubTaskMeta
DependencyTables []string
ProductTables []string
- ForceRunOnResume bool // Should a subtask be ran dispite it was finished before
+ ForceRunOnResume bool // Should a subtask be ran despite it was finished before
}
// PluginTask Implement this interface to let framework run tasks for you
diff --git a/backend/core/utils/network_helper_test.go b/backend/core/utils/network_helper_test.go
index 86f76fbb1..a46d4096b 100644
--- a/backend/core/utils/network_helper_test.go
+++ b/backend/core/utils/network_helper_test.go
@@ -53,7 +53,7 @@ func TestResolvePort(t *testing.T) {
}
_, err = ResolvePort("", "rabbitmq")
if err == nil {
- t.Errorf("Expected error %s, Got nil", "schema not fount")
+ t.Errorf("Expected error %s, Got nil", "schema not found")
}
_, err = ResolvePort("", "")
if err == nil {
diff --git a/backend/helpers/e2ehelper/data_flow_tester.go b/backend/helpers/e2ehelper/data_flow_tester.go
index 69b6830e2..5a2163ef7 100644
--- a/backend/helpers/e2ehelper/data_flow_tester.go
+++ b/backend/helpers/e2ehelper/data_flow_tester.go
@@ -112,7 +112,7 @@ func NewDataFlowTester(t *testing.T, pluginName string, pluginMeta plugin.Plugin
cfg.Set(`DB_URL`, cfg.GetString(`E2E_DB_URL`))
db, err := runner.NewGormDb(cfg, logruslog.Global)
if err != nil {
- // if here fail with error `acces denied for user` you need to create database by your self as follow command
+ // if here fail with error `access denied for user` you need to create database by your self as follow command
// create databases lake_test;
// grant all on lake_test.* to 'merico'@'%';
panic(err)
diff --git a/backend/helpers/migrationhelper/migrationhelper.go b/backend/helpers/migrationhelper/migrationhelper.go
index 24eb537a7..691413b1a 100644
--- a/backend/helpers/migrationhelper/migrationhelper.go
+++ b/backend/helpers/migrationhelper/migrationhelper.go
@@ -32,7 +32,7 @@ import (
helper "github.com/apache/incubator-devlake/helpers/pluginhelper/api"
)
-// AutoMigrateTables runs AutoMigrate for muliple tables
+// AutoMigrateTables runs AutoMigrate for multiple tables
func AutoMigrateTables(basicRes context.BasicRes, dst ...interface{}) errors.Error {
db := basicRes.GetDal()
for _, entity := range dst {
@@ -74,14 +74,14 @@ func ChangeColumnsType[D any](
err = db.AutoMigrate(new(D), dal.From(tableName))
if err != nil {
- return errors.Default.Wrap(err, "AutoMigrate for Add Colume Error")
+ return errors.Default.Wrap(err, "AutoMigrate for Add Column Error")
}
defer func() {
if err != nil {
err1 := db.DropColumns(tableName, columns...)
if err1 != nil {
- err = errors.Default.Wrap(err, fmt.Sprintf("RollBack by DropColume failed.Relevant data needs to be repaired manually.%s", err1.Error()))
+ err = errors.Default.Wrap(err, fmt.Sprintf("RollBack by DropColumn failed.Relevant data needs to be repaired manually.%s", err1.Error()))
}
}
}()
diff --git a/backend/helpers/migrationhelper/migrationhelper_test.go b/backend/helpers/migrationhelper/migrationhelper_test.go
index 5fe70d52b..7962100e4 100644
--- a/backend/helpers/migrationhelper/migrationhelper_test.go
+++ b/backend/helpers/migrationhelper/migrationhelper_test.go
@@ -132,7 +132,7 @@ func TestTransformTable(t *testing.T) {
assert.Equal(t, dts[2].Id, "fd61a03af4f77d870fc21e05e7e80678095c92d808cfb3b5c279ee04c74aca1357ef3d346f24f386216563752b0c447a35c041e0b7143f929dc4de27742e3307")
}).Return(nil).Once()
- // for Primarykey autoincrement cheking
+ // for Primarykey autoincrement checking
mockDal.On("GetColumns", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
tableName := args.Get(0).(dal.Tabler).TableName()
assert.Equal(t, tableName, TestTableNameSrc)
@@ -188,7 +188,7 @@ func TestTransformTable_RollBack(t *testing.T) {
assert.NotEqual(t, oldname, tmpname)
}).Return(nil).Once()
- // checking if Rename and Drop RollBack working with rigth table
+ // checking if Rename and Drop RollBack working with right table
mockDal.On("RenameTable", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
tmpname, ok := args.Get(0).(string)
assert.Equal(t, ok, true)
@@ -203,7 +203,7 @@ func TestTransformTable_RollBack(t *testing.T) {
assert.Equal(t, oldname, TestTableNameSrc)
}).Return(nil).Once()
- // for Primarykey autoincrement cheking
+ // for Primarykey autoincrement checking
mockDal.On("GetColumns", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
tableName := args.Get(0).(dal.Tabler).TableName()
assert.Equal(t, tableName, TestTableNameSrc)
@@ -296,7 +296,7 @@ func TestCopyTableColumns(t *testing.T) {
assert.Equal(t, dts[2].Id, "fd61a03af4f77d870fc21e05e7e80678095c92d808cfb3b5c279ee04c74aca1357ef3d346f24f386216563752b0c447a35c041e0b7143f929dc4de27742e3307")
}).Return(nil).Once()
- // for Primarykey autoincrement cheking
+ // for Primarykey autoincrement checking
mockDal.On("GetColumns", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
tableName := args.Get(0).(dal.Tabler).TableName()
assert.Equal(t, tableName, TestTableNameSrc)
@@ -352,7 +352,7 @@ func TestCopyTableColumns_RollBack(t *testing.T) {
assert.NotEqual(t, oldname, tmpname)
}).Return(nil).Once()
- // checking if Rename and Drop RollBack working with rigth table
+ // checking if Rename and Drop RollBack working with right table
mockDal.On("RenameTable", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
tmpname, ok := args.Get(0).(string)
assert.Equal(t, ok, true)
@@ -367,7 +367,7 @@ func TestCopyTableColumns_RollBack(t *testing.T) {
assert.Equal(t, oldname, TestTableNameSrc)
}).Return(nil).Once()
- // for Primarykey autoincrement cheking
+ // for Primarykey autoincrement checking
mockDal.On("GetColumns", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
tableName := args.Get(0).(dal.Tabler).TableName()
assert.Equal(t, tableName, TestTableNameSrc)
@@ -521,7 +521,7 @@ func TestTransformColumns_RollBack(t *testing.T) {
assert.NotEqual(t, columnName, tmpColumnName)
}).Return(nil).Once()
- // checking if Rename and Drop RollBack working with rigth table
+ // checking if Rename and Drop RollBack working with right table
mockDal.On("RenameColumn", mock.Anything, mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
tableName, ok := args.Get(0).(string)
assert.Equal(t, ok, true)
@@ -634,7 +634,7 @@ func TestChangeColumnsType_Rollback(t *testing.T) {
assert.NotEqual(t, columnName, tmpColumnName)
}).Return(nil).Once()
- // checking if Rename and Drop RollBack working with rigth table
+ // checking if Rename and Drop RollBack working with right table
mockDal.On("RenameColumn", mock.Anything, mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
tableName, ok := args.Get(0).(string)
assert.Equal(t, ok, true)
diff --git a/backend/helpers/pluginhelper/api/api_async_client.go b/backend/helpers/pluginhelper/api/api_async_client.go
index 79ef16f1e..c926c33ac 100644
--- a/backend/helpers/pluginhelper/api/api_async_client.go
+++ b/backend/helpers/pluginhelper/api/api_async_client.go
@@ -122,7 +122,7 @@ func CreateAsyncApiClient(
return nil, errors.Default.Wrap(err, "failed to create scheduler")
}
- // finally, wrap around api client with async sematic
+ // finally, wrap around api client with async semantic
return &ApiAsyncClient{
apiClient,
scheduler,
diff --git a/backend/helpers/pluginhelper/api/api_client.go b/backend/helpers/pluginhelper/api/api_client.go
index b0cfccf49..7354a29db 100644
--- a/backend/helpers/pluginhelper/api/api_client.go
+++ b/backend/helpers/pluginhelper/api/api_client.go
@@ -432,7 +432,7 @@ func UnmarshalResponse(res *http.Response, v interface{}) errors.Error {
if err != nil {
statusCode := res.StatusCode
if statusCode == http.StatusUnauthorized || statusCode == http.StatusForbidden {
- statusCode = http.StatusBadRequest // to avoid Basic Auth Dialog poping up
+ statusCode = http.StatusBadRequest // to avoid Basic Auth Dialog popping up
}
return errors.HttpStatus(statusCode).Wrap(err, fmt.Sprintf("error decoding response from %s: raw response: %s", res.Request.URL.String(), string(resBody)))
}
diff --git a/backend/helpers/pluginhelper/api/batch_save_divider.go b/backend/helpers/pluginhelper/api/batch_save_divider.go
index 406d638e9..393bbcc9b 100644
--- a/backend/helpers/pluginhelper/api/batch_save_divider.go
+++ b/backend/helpers/pluginhelper/api/batch_save_divider.go
@@ -73,10 +73,10 @@ func (d *BatchSaveDivider) ForType(rowType reflect.Type) (*BatchSave, errors.Err
rowElemType := rowType.Elem()
d.log.Debug("missing BatchSave for type %s", rowElemType.Name())
row := reflect.New(rowElemType).Interface()
- // check if rowType had RawDataOrigin embeded
+ // check if rowType had RawDataOrigin embedded
field, hasField := rowElemType.FieldByName("RawDataOrigin")
if !hasField || field.Type != reflect.TypeOf(common.RawDataOrigin{}) {
- return nil, errors.Default.New(fmt.Sprintf("type %s must have RawDataOrigin embeded", rowElemType.Name()))
+ return nil, errors.Default.New(fmt.Sprintf("type %s must have RawDataOrigin embedded", rowElemType.Name()))
}
d.batches[rowType] = batch
if !d.incrementalMode {
diff --git a/backend/helpers/pluginhelper/api/data_convertor_stateful.go b/backend/helpers/pluginhelper/api/data_convertor_stateful.go
index 1c5292dc2..f742df83c 100644
--- a/backend/helpers/pluginhelper/api/data_convertor_stateful.go
+++ b/backend/helpers/pluginhelper/api/data_convertor_stateful.go
@@ -192,7 +192,7 @@ func (converter *StatefulDataConverter[InputType]) Execute() errors.Error {
if err != nil {
return err
}
- // save the incremantal state
+ // save the incremental state
return converter.SubtaskStateManager.Close()
}
diff --git a/backend/helpers/pluginhelper/api/ds_remote_api_scope_list_helper.go b/backend/helpers/pluginhelper/api/ds_remote_api_scope_list_helper.go
index 076b2c6f8..83fef0612 100644
--- a/backend/helpers/pluginhelper/api/ds_remote_api_scope_list_helper.go
+++ b/backend/helpers/pluginhelper/api/ds_remote_api_scope_list_helper.go
@@ -31,7 +31,7 @@ const (
RAS_ENTRY_TYPE_SCOPE = "scope"
)
-// DsListRemoteScopes is the function type for listing remote scopes that must be implmeneted by the plugin
+// DsListRemoteScopes is the function type for listing remote scopes that must be implemented by the plugin
type DsListRemoteScopes[C plugin.ToolLayerApiConnection, S plugin.ToolLayerScope, P any] func(
connection *C, apiClient plugin.ApiClient, groupId string, page P) (children []models.DsRemoteApiScopeListEntry[S], nextPage *P, errr errors.Error)
diff --git a/backend/helpers/pluginhelper/api/iterator.go b/backend/helpers/pluginhelper/api/iterator.go
index 37ffd2b85..6f991eb90 100644
--- a/backend/helpers/pluginhelper/api/iterator.go
+++ b/backend/helpers/pluginhelper/api/iterator.go
@@ -55,7 +55,7 @@ func NewBatchedDalCursorIterator(db dal.Dal, cursor dal.Rows, elemType reflect.T
}, nil
}
-// HasNext increments the row curser. If we're at the end, it'll return false.
+// HasNext increments the row cursor. If we're at the end, it'll return false.
func (c *DalCursorIterator) HasNext() bool {
return c.cursor.Next()
}
@@ -149,7 +149,7 @@ type QueueIterator struct {
queue *Queue
}
-// HasNext increments the row curser. If we're at the end, it'll return false.
+// HasNext increments the row cursor. If we're at the end, it'll return false.
func (q *QueueIterator) HasNext() bool {
return q.queue.GetCount() > 0
}
diff --git a/backend/helpers/pluginhelper/api/model_api_helper.go b/backend/helpers/pluginhelper/api/model_api_helper.go
index 3e35a7c5d..809192aec 100644
--- a/backend/helpers/pluginhelper/api/model_api_helper.go
+++ b/backend/helpers/pluginhelper/api/model_api_helper.go
@@ -226,11 +226,11 @@ func parsePagination[P any](input *plugin.ApiResourceInput) (*P, errors.Error) {
pagination := new(P)
err := utils.DecodeMapStruct(input.Query, pagination, false)
if err != nil {
- return nil, errors.BadInput.Wrap(err, "faild to decode pagination from query string")
+ return nil, errors.BadInput.Wrap(err, "failed to decode pagination from query string")
}
err = utils.DecodeMapStruct(input.Params, pagination, false)
if err != nil {
- return nil, errors.BadInput.Wrap(err, "faild to decode pagination from path variables")
+ return nil, errors.BadInput.Wrap(err, "failed to decode pagination from path variables")
}
if e := vld.Struct(pagination); e != nil {
return nil, errors.BadInput.Wrap(e, "invalid pagination parameters")
diff --git a/backend/helpers/pluginhelper/api/worker_scheduler.go b/backend/helpers/pluginhelper/api/worker_scheduler.go
index d7cfc5096..74d1c3f70 100644
--- a/backend/helpers/pluginhelper/api/worker_scheduler.go
+++ b/backend/helpers/pluginhelper/api/worker_scheduler.go
@@ -84,7 +84,7 @@ func NewWorkerScheduler(
// It doesn't return error because it wouldn't be any when with a Blocking semantic, returned error does nothing but
// causing confusion, more often, people thought it is returned by the task.
// Since it is async task, the callframes would not be available for production mode, you can export Environment
-// Varaible ASYNC_CF=true to enable callframes capturing when debugging.
+// Variable ASYNC_CF=true to enable callframes capturing when debugging.
// IMPORTANT: do NOT call SubmitBlocking inside the async task, it is likely to cause a deadlock, call
// SubmitNonBlocking instead when number of tasks is relatively small.
func (s *WorkerScheduler) SubmitBlocking(task func() errors.Error) {
@@ -118,7 +118,7 @@ func (s *WorkerScheduler) SubmitBlocking(task func() errors.Error) {
/*
func (s *WorkerScheduler) gatherCallFrames() string {
- cf := "set Environment Varaible ASYNC_CF=true to enable callframes capturing"
+ cf := "set Environment Variable ASYNC_CF=true to enable callframes capturing"
if callframeEnabled {
cf = utils.GatherCallFrames(1)
}
diff --git a/backend/helpers/pluginhelper/csv_file_test.go b/backend/helpers/pluginhelper/csv_file_test.go
index b24448688..0958ca7d0 100644
--- a/backend/helpers/pluginhelper/csv_file_test.go
+++ b/backend/helpers/pluginhelper/csv_file_test.go
@@ -36,8 +36,8 @@ func TestExampleCsvFile(t *testing.T) {
defer iter.Close()
for iter.HasNext() {
row := iter.Fetch()
- assert.Equal(t, row["name"], "foobar", "name not euqal")
- assert.Equal(t, row["json"], `{"url": "https://example.com"}`, "json not euqal")
+ assert.Equal(t, row["name"], "foobar", "name not equal")
+ assert.Equal(t, row["json"], `{"url": "https://example.com"}`, "json not equal")
}
}
diff --git a/backend/helpers/srvhelper/model_service_helper.go b/backend/helpers/srvhelper/model_service_helper.go
index 9c3d05ece..0ca94827c 100644
--- a/backend/helpers/srvhelper/model_service_helper.go
+++ b/backend/helpers/srvhelper/model_service_helper.go
@@ -89,7 +89,7 @@ func (srv *ModelSrvHelper[M]) ValidateModel(model *M) errors.Error {
}
// basic validator
if e := srv.validator.Struct(model); e != nil {
- return errors.BadInput.Wrap(e, "validation faild")
+ return errors.BadInput.Wrap(e, "validation failed")
}
return nil
}
diff --git a/backend/helpers/srvhelper/scope_service_helper_test.go b/backend/helpers/srvhelper/scope_service_helper_test.go
index 1052b8164..54d7979f1 100644
--- a/backend/helpers/srvhelper/scope_service_helper_test.go
+++ b/backend/helpers/srvhelper/scope_service_helper_test.go
@@ -35,7 +35,7 @@ func Test_setDefaultEntities(t *testing.T) {
setDefaultEntities(sc1)
assert.Equal(t, sc1.Entities, plugin.DOMAIN_TYPES)
- // plugin embeded the common ScopeConfig
+ // plugin embedded the common ScopeConfig
sc2 := &struct {
common.ScopeConfig
}{
diff --git a/backend/impls/context/default_basic_res.go b/backend/impls/context/default_basic_res.go
index cb277a540..1ee4976db 100644
--- a/backend/impls/context/default_basic_res.go
+++ b/backend/impls/context/default_basic_res.go
@@ -36,7 +36,7 @@ func (c *DefaultBasicRes) GetConfigReader() config.ConfigReader {
return c.cfg
}
-// GetConfig returns the value of the specificed name
+// GetConfig returns the value of the specified name
func (c *DefaultBasicRes) GetConfig(name string) string {
return c.cfg.GetString(name)
}
diff --git a/backend/impls/dalgorm/dalgorm.go b/backend/impls/dalgorm/dalgorm.go
index 3b11312a3..ba635355f 100644
--- a/backend/impls/dalgorm/dalgorm.go
+++ b/backend/impls/dalgorm/dalgorm.go
@@ -277,7 +277,7 @@ func (d *Dalgorm) Delete(entity interface{}, clauses ...dal.Clause) errors.Error
return d.convertGormError(buildTx(d.db, clauses).Delete(entity).Error)
}
-// UpdateColumn allows you to update mulitple records
+// UpdateColumn allows you to update multiple records
func (d *Dalgorm) UpdateColumn(entityOrTable interface{}, columnName string, value interface{}, clauses ...dal.Clause) errors.Error {
d.unwrapDynamic(&entityOrTable, &clauses)
if expr, ok := value.(dal.DalClause); ok {
@@ -286,7 +286,7 @@ func (d *Dalgorm) UpdateColumn(entityOrTable interface{}, columnName string, val
return d.convertGormError(buildTx(d.db, clauses).Update(columnName, value).Error)
}
-// UpdateColumns allows you to update multiple columns of mulitple records
+// UpdateColumns allows you to update multiple columns of multiple records
func (d *Dalgorm) UpdateColumns(entityOrTable interface{}, set []dal.DalSet, clauses ...dal.Clause) errors.Error {
d.unwrapDynamic(&entityOrTable, &clauses)
updatesSet := make(map[string]interface{})
diff --git a/backend/plugins/ae/api/connection.go b/backend/plugins/ae/api/connection.go
index 2e42c6152..cc3a3889e 100644
--- a/backend/plugins/ae/api/connection.go
+++ b/backend/plugins/ae/api/connection.go
@@ -52,7 +52,7 @@ func testConnection(ctx context.Context, connection models.AeConn) (*plugin.ApiR
return &plugin.ApiResourceOutput{Body: true, Status: 200}, nil
case 401: // error secretKey or nonceStr
return &plugin.ApiResourceOutput{Body: false, Status: http.StatusBadRequest}, nil
- default: // unknow what happen , back to user
+ default: // unknown what happen , back to user
return &plugin.ApiResourceOutput{Body: res.Body, Status: res.StatusCode}, nil
}
}
diff --git a/backend/plugins/bamboo/api/connection_api.go b/backend/plugins/bamboo/api/connection_api.go
index 3fd72a22e..4c142fc5d 100644
--- a/backend/plugins/bamboo/api/connection_api.go
+++ b/backend/plugins/bamboo/api/connection_api.go
@@ -108,7 +108,7 @@ func TestExistingConnection(input *plugin.ApiResourceInput) (*plugin.ApiResource
// @Param body body models.BambooConnection true "json body"
// @Success 200 {object} models.BambooConnection
// @Failure 400 {string} errcode.Error "Bad Request"
-// @Failure 500 {string} errcode.Error "Internel Error"
+// @Failure 500 {string} errcode.Error "Internal Error"
// @Router /plugins/bamboo/connections [POST]
func PostConnections(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput, errors.Error) {
return dsHelper.ConnApi.Post(input)
@@ -121,7 +121,7 @@ func PostConnections(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput,
// @Param connectionId path int true "connection ID"
// @Success 200 {object} models.BambooConnection
// @Failure 400 {string} errcode.Error "Bad Request"
-// @Failure 500 {string} errcode.Error "Internel Error"
+// @Failure 500 {string} errcode.Error "Internal Error"
// @Router /plugins/bamboo/connections/{connectionId} [PATCH]
func PatchConnection(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput, errors.Error) {
return dsHelper.ConnApi.Patch(input)
@@ -134,7 +134,7 @@ func PatchConnection(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput,
// @Success 200 {object} models.BambooConnection
// @Failure 400 {string} errcode.Error "Bad Request"
// @Failure 409 {object} services.BlueprintProjectPairs "References exist to this connection"
-// @Failure 500 {string} errcode.Error "Internel Error"
+// @Failure 500 {string} errcode.Error "Internal Error"
// @Router /plugins/bamboo/connections/{connectionId} [DELETE]
func DeleteConnection(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput, errors.Error) {
return dsHelper.ConnApi.Delete(input)
@@ -145,7 +145,7 @@ func DeleteConnection(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput
// @Tags plugins/bamboo
// @Success 200 {object} []models.BambooConnection
// @Failure 400 {string} errcode.Error "Bad Request"
-// @Failure 500 {string} errcode.Error "Internel Error"
+// @Failure 500 {string} errcode.Error "Internal Error"
// @Router /plugins/bamboo/connections [GET]
func ListConnections(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput, errors.Error) {
return dsHelper.ConnApi.GetAll(input)
@@ -157,7 +157,7 @@ func ListConnections(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput,
// @Param connectionId path int true "connection ID"
// @Success 200 {object} models.BambooConnection
// @Failure 400 {string} errcode.Error "Bad Request"
-// @Failure 500 {string} errcode.Error "Internel Error"
+// @Failure 500 {string} errcode.Error "Internal Error"
// @Router /plugins/bamboo/connections/{connectionId} [GET]
func GetConnection(input *plugin.ApiResourceInput) (*plugin.ApiResourceOutput,
errors.Error) {
return dsHelper.ConnApi.GetDetail(input)
diff --git a/backend/plugins/bamboo/models/plan.go
b/backend/plugins/bamboo/models/plan.go
index 3811596f1..386fbbbc4 100644
--- a/backend/plugins/bamboo/models/plan.go
+++ b/backend/plugins/bamboo/models/plan.go
@@ -110,7 +110,7 @@ type SearchEntity struct {
Type string `json:"type"`
}
-// Name trys to keep plan's name field the same with name in /remote-scopes.
+// Name tries to keep plan's name field the same with name in /remote-scopes.
// In /remote-scopes, plan's name is "{projectName - planName}".
func (entity SearchEntity) Name() string {
return strings.Join([]string{entity.ProjectName, entity.PlanName}, " -
")
diff --git a/backend/plugins/bamboo/tasks/shared.go
b/backend/plugins/bamboo/tasks/shared.go
index 182670b95..fc715aa11 100644
--- a/backend/plugins/bamboo/tasks/shared.go
+++ b/backend/plugins/bamboo/tasks/shared.go
@@ -139,7 +139,7 @@ func generateFakeRepoUrl(endpoint string, repoId int)
(string, error) {
return fmt.Sprintf("fake://%s/repos/%d", endpointURL.Host, repoId), nil
}
-// covertError will indentify some known errors and transform it to a simple form.
+// covertError will identify some known errors and transform it to a simple form.
func covertError(err errors.Error) errors.Error {
if err == nil {
return nil
diff --git a/backend/plugins/customize/service/service.go
b/backend/plugins/customize/service/service.go
index 5ef3bcbb3..8217be3a5 100644
--- a/backend/plugins/customize/service/service.go
+++ b/backend/plugins/customize/service/service.go
@@ -550,7 +550,7 @@ func (s *Service) qaTestCaseExecutionHandler(qaProjectId
string) func(record map
}
// issueRepoCommitHandlerFactory returns a handler that will populate the
`issue_commits` and `issue_repo_commits` table
-// ths issueCommitsFields is used to filter the fields that should be inserted into the `issue_commits` table
+// the issueCommitsFields is used to filter the fields that should be inserted into the `issue_commits` table
func (s *Service) issueRepoCommitHandler(record map[string]interface{})
errors.Error {
err := s.dal.CreateWithMap(&crossdomain.IssueRepoCommit{}, record)
if err != nil {
diff --git a/backend/plugins/dora/api/data.go b/backend/plugins/dora/api/data.go
index baada7a4c..6f610ffa4 100644
--- a/backend/plugins/dora/api/data.go
+++ b/backend/plugins/dora/api/data.go
@@ -25,7 +25,7 @@ import (
const RAW_DEPLOYMENTS_TABLE = `dora_deplyments`
-//TODO Please modify the folowing code to adapt to your plugin
+//TODO Please modify the following code to adapt to your plugin
/*
POST /plugins/dora/deployments
{
@@ -39,7 +39,7 @@ func PostDeployments(input *plugin.ApiResourceInput)
(*plugin.ApiResourceOutput,
const RAW_ISSUES_TABLE = `dora_issues`
-//TODO Please modify the folowing code to adapt to your plugin
+//TODO Please modify the following code to adapt to your plugin
/*
POST /plugins/dora/issues
{
@@ -51,7 +51,7 @@ func PostIssues(input *plugin.ApiResourceInput)
(*plugin.ApiResourceOutput, erro
return &plugin.ApiResourceOutput{Body: nil, Status: http.StatusOK}, nil
}
-//TODO Please modify the folowing code to adapt to your plugin
+//TODO Please modify the following code to adapt to your plugin
/*
POST /plugins/dora/issues/:id/close
{
diff --git
a/backend/plugins/gitee/models/migrationscripts/archived/connection.go
b/backend/plugins/gitee/models/migrationscripts/archived/connection.go
index 55af7fc77..14370ed58 100644
--- a/backend/plugins/gitee/models/migrationscripts/archived/connection.go
+++ b/backend/plugins/gitee/models/migrationscripts/archived/connection.go
@@ -30,7 +30,7 @@ type RestConnection struct {
BaseConnection `mapstructure:",squash"`
Endpoint string `mapstructure:"endpoint" validate:"required"
json:"endpoint"`
Proxy string `mapstructure:"proxy" json:"proxy"`
- RateLimitPerHour int `comment:"api request rate limt per hour" json:"rateLimit"`
+ RateLimitPerHour int `comment:"api request rate limit per hour" json:"rateLimit"`
}
type BaseConnection struct {
diff --git a/backend/plugins/gitextractor/parser/repo_gogit.go
b/backend/plugins/gitextractor/parser/repo_gogit.go
index 21cf09cd9..8837aa383 100644
--- a/backend/plugins/gitextractor/parser/repo_gogit.go
+++ b/backend/plugins/gitextractor/parser/repo_gogit.go
@@ -400,7 +400,7 @@ func (r *GogitRepoCollector) storeParentCommits(commitSha
string, commit *object
for i := 0; i < commit.NumParents(); i++ {
parent, err := commit.Parent(i)
if err != nil {
- // parent commit might not exist when repo is shallow cloned (tradeoff of supporting timeAfter paramenter)
+ // parent commit might not exist when repo is shallow cloned (tradeoff of supporting timeAfter parameter)
if err.Error() == "object not found" {
continue
}
diff --git a/backend/plugins/gitlab/e2e/job_test.go
b/backend/plugins/gitlab/e2e/job_test.go
index 1705f59a8..64f458240 100644
--- a/backend/plugins/gitlab/e2e/job_test.go
+++ b/backend/plugins/gitlab/e2e/job_test.go
@@ -71,7 +71,7 @@ func TestGitlabJobDataFlow(t *testing.T) {
),
)
- // verifi when production regex is omitted
+ // verify when production regex is omitted
dataflowTester.FlushTabler(&devops.CICDTask{})
dataflowTester.Subtask(tasks.ConvertJobMeta, taskData)
dataflowTester.VerifyTableWithOptions(&devops.CICDTask{},
e2ehelper.TableOptions{
diff --git
a/backend/plugins/gitlab/models/migrationscripts/archived/connection.go
b/backend/plugins/gitlab/models/migrationscripts/archived/connection.go
index f7b7f9f67..7d717b5dd 100644
--- a/backend/plugins/gitlab/models/migrationscripts/archived/connection.go
+++ b/backend/plugins/gitlab/models/migrationscripts/archived/connection.go
@@ -31,7 +31,7 @@ type RestConnection struct {
BaseConnection `mapstructure:",squash"`
Endpoint string `mapstructure:"endpoint" validate:"required"
json:"endpoint"`
Proxy string `mapstructure:"proxy" json:"proxy"`
- RateLimitPerHour int `comment:"api request rate limt per hour" json:"rateLimit"`
+ RateLimitPerHour int `comment:"api request rate limit per hour" json:"rateLimit"`
}
type BaseConnection struct {
diff --git a/backend/plugins/gitlab/tasks/shared.go
b/backend/plugins/gitlab/tasks/shared.go
index 3c100d7dd..6621ba3f2 100644
--- a/backend/plugins/gitlab/tasks/shared.go
+++ b/backend/plugins/gitlab/tasks/shared.go
@@ -187,7 +187,7 @@ func GetMergeRequestsIterator(taskCtx
plugin.SubTaskContext, apiCollector *api.S
clauses := []dal.Clause{
dal.Select("gmr.gitlab_id, gmr.iid"),
dal.From("_tool_gitlab_merge_requests gmr"),
- // collect only openning merge request's notes and commits to speed up the process
+ // collect only opening merge request's notes and commits to speed up the process
dal.Where(
`gmr.project_id = ? and gmr.connection_id = ?`,
data.Options.ProjectId, data.Options.ConnectionId,
diff --git
a/backend/plugins/issue_trace/tasks/issue_status_history_convertor.go
b/backend/plugins/issue_trace/tasks/issue_status_history_convertor.go
index 42bfd0fd3..efe8ddb4f 100644
--- a/backend/plugins/issue_trace/tasks/issue_status_history_convertor.go
+++ b/backend/plugins/issue_trace/tasks/issue_status_history_convertor.go
@@ -219,7 +219,7 @@ func ConvertIssueStatusHistory(taskCtx
plugin.SubTaskContext) errors.Error {
}
}
}
- logger.Info("issues status history covert successfully")
+ logger.Info("issues status history converted successfully")
return nil
}
diff --git a/backend/plugins/jenkins/models/build.go
b/backend/plugins/jenkins/models/build.go
index 203d81d00..b38aaa15a 100644
--- a/backend/plugins/jenkins/models/build.go
+++ b/backend/plugins/jenkins/models/build.go
@@ -35,7 +35,7 @@ type JenkinsBuild struct {
Number int64 `gorm:"index"`
Result string // Result
Timestamp int64 // start time
- StartTime time.Time // convered by timestamp
+ StartTime time.Time // converted by timestamp
Type string `gorm:"index;type:varchar(255)"`
Class string `gorm:"index;type:varchar(255)" `
TriggeredBy string `gorm:"type:varchar(255)"`
diff --git
a/backend/plugins/jenkins/models/migrationscripts/20220916_modify_jenkins_build.go
b/backend/plugins/jenkins/models/migrationscripts/20220916_modify_jenkins_build.go
index c0d49a0e9..ec4e33229 100644
---
a/backend/plugins/jenkins/models/migrationscripts/20220916_modify_jenkins_build.go
+++
b/backend/plugins/jenkins/models/migrationscripts/20220916_modify_jenkins_build.go
@@ -40,7 +40,7 @@ type jenkinsBuild20220916Before struct {
Number int64 `gorm:"primaryKey"`
Result string // Result
Timestamp int64 // start time
- StartTime time.Time // convered by timestamp
+ StartTime time.Time // converted by timestamp
Type string `gorm:"index;type:varchar(255)"`
Class string `gorm:"index;type:varchar(255)" `
TriggeredBy string `gorm:"type:varchar(255)"`
@@ -63,7 +63,7 @@ type jenkinsBuild20220916After struct {
Number int64 `gorm:"index"`
Result string // Result
Timestamp int64 // start time
- StartTime time.Time // convered by timestamp
+ StartTime time.Time // converted by timestamp
Type string `gorm:"index;type:varchar(255)"`
Class string `gorm:"index;type:varchar(255)" `
TriggeredBy string `gorm:"type:varchar(255)"`
diff --git
a/backend/plugins/jenkins/models/migrationscripts/20221131_add_fullName_for_builds.go
b/backend/plugins/jenkins/models/migrationscripts/20221131_add_fullName_for_builds.go
index a7c0692c2..2802835e0 100644
---
a/backend/plugins/jenkins/models/migrationscripts/20221131_add_fullName_for_builds.go
+++
b/backend/plugins/jenkins/models/migrationscripts/20221131_add_fullName_for_builds.go
@@ -48,7 +48,7 @@ type jenkinsBuild20221131Before struct {
Number int64 `gorm:"index"`
Result string // Result
Timestamp int64 // start time
- StartTime time.Time // convered by timestamp
+ StartTime time.Time // converted by timestamp
Type string `gorm:"index;type:varchar(255)"`
Class string `gorm:"index;type:varchar(255)" `
TriggeredBy string `gorm:"type:varchar(255)"`
@@ -72,7 +72,7 @@ type jenkinsBuild20221131After struct {
Number int64 `gorm:"index"`
Result string // Result
Timestamp int64 // start time
- StartTime time.Time // convered by timestamp
+ StartTime time.Time // converted by timestamp
Type string `gorm:"index;type:varchar(255)"`
Class string `gorm:"index;type:varchar(255)" `
TriggeredBy string `gorm:"type:varchar(255)"`
diff --git a/backend/plugins/jenkins/models/migrationscripts/archived/build.go
b/backend/plugins/jenkins/models/migrationscripts/archived/build.go
index 4659e0cc2..70bc781f6 100644
--- a/backend/plugins/jenkins/models/migrationscripts/archived/build.go
+++ b/backend/plugins/jenkins/models/migrationscripts/archived/build.go
@@ -35,7 +35,7 @@ type JenkinsBuild struct {
Number int64 `gorm:"primaryKey"`
Result string // Result
Timestamp int64 // start time
- StartTime time.Time // convered by timestamp
+ StartTime time.Time // converted by timestamp
CommitSha string `gorm:"type:varchar(255)"`
}
diff --git
a/backend/plugins/jira/models/migrationscripts/20220716_add_init_tables.go
b/backend/plugins/jira/models/migrationscripts/20220716_add_init_tables.go
index d27726f76..6f161976c 100644
--- a/backend/plugins/jira/models/migrationscripts/20220716_add_init_tables.go
+++ b/backend/plugins/jira/models/migrationscripts/20220716_add_init_tables.go
@@ -45,7 +45,7 @@ type jiraConnection20220716Before struct {
StoryPointField string `gorm:"type:varchar(50);"
json:"storyPointField"`
RemotelinkCommitShaPattern string
`gorm:"type:varchar(255);comment='golang regexp, the first group will be
recognized as commit sha, ref https://github.com/google/re2/wiki/Syntax'"
json:"remotelinkCommitShaPattern"`
Proxy string `json:"proxy"`
- RateLimit int `comment:"api request rate limt per hour" json:"rateLimit"`
+ RateLimit int `comment:"api request rate limit per hour" json:"rateLimit"`
}
type addInitTables20220716 struct{}
diff --git a/backend/plugins/jira/models/migrationscripts/archived/source.go
b/backend/plugins/jira/models/migrationscripts/archived/source.go
index 2ceed1613..951eba5b5 100644
--- a/backend/plugins/jira/models/migrationscripts/archived/source.go
+++ b/backend/plugins/jira/models/migrationscripts/archived/source.go
@@ -30,7 +30,7 @@ type JiraSource struct {
StoryPointField string `gorm:"type:varchar(50);"
json:"storyPointField"`
RemotelinkCommitShaPattern string
`gorm:"type:varchar(255);comment='golang regexp, the first group will be
recognized as commit sha, ref https://github.com/google/re2/wiki/Syntax'"
json:"remotelinkCommitShaPattern"`
Proxy string `json:"proxy"`
- RateLimit int `comment:"api request rate limt per second"`
+ RateLimit int `comment:"api request rate limit per second"`
}
type JiraIssueTypeMapping struct {
diff --git a/backend/plugins/jira/tasks/issue_extractor.go
b/backend/plugins/jira/tasks/issue_extractor.go
index d25423b91..946207113 100644
--- a/backend/plugins/jira/tasks/issue_extractor.go
+++ b/backend/plugins/jira/tasks/issue_extractor.go
@@ -163,7 +163,7 @@ func extractIssues(data *JiraTaskData, mappings
*typeMappings, apiIssue *apiv2mo
if value, ok :=
mappings.StandardStatusMappings[issue.Type][issue.StatusKey]; ok {
issue.StdStatus = value.StandardStatus
}
- // issue commments
+ // issue comments
results = append(results, issue)
for _, comment := range comments {
results = append(results, comment)
diff --git a/backend/plugins/linker/impl/impl.go
b/backend/plugins/linker/impl/impl.go
index 917b22df9..e2efb809d 100644
--- a/backend/plugins/linker/impl/impl.go
+++ b/backend/plugins/linker/impl/impl.go
@@ -42,7 +42,7 @@ var _ interface {
type Linker struct{}
func (p Linker) Description() string {
- return "link some cross table datas together"
+ return "link some cross table data together"
}
// RequiredDataEntities hasn't been used so far
diff --git a/backend/plugins/refdiff/tasks/refdiff_task_data.go
b/backend/plugins/refdiff/tasks/refdiff_task_data.go
index 782bb9214..c4bc074a2 100644
--- a/backend/plugins/refdiff/tasks/refdiff_task_data.go
+++ b/backend/plugins/refdiff/tasks/refdiff_task_data.go
@@ -121,7 +121,7 @@ func (rs RefsReverseSemver) Swap(i, j int) {
func CalculateTagPattern(db dal.Dal, tagsPattern string, tagsLimit int,
tagsOrder string) (Refs, errors.Error) {
rs := Refs{}
- // caculate Pattern part
+ // calculate Pattern part
if tagsPattern == "" || tagsLimit <= 1 {
return rs, nil
}
@@ -176,7 +176,7 @@ func CalculateCommitPairs(db dal.Dal, repoId string, pairs
[]models.RefPair, rs
commitPairs = append(commitPairs, [4]string{rs[i-1].CommitSha,
rs[i].CommitSha, rs[i-1].Name, rs[i].Name})
}
- // caculate pairs part
+ // calculate pairs part
// convert ref pairs into commit pairs
ref2sha := func(refName string) (string, error) {
ref := &code.Ref{}
@@ -186,7 +186,7 @@ func CalculateCommitPairs(db dal.Dal, repoId string, pairs
[]models.RefPair, rs
ref.Id = fmt.Sprintf("%s:%s", repoId, refName)
err := db.First(ref)
if err != nil && !db.IsErrorNotFound(err) {
- return "", errors.NotFound.Wrap(err, fmt.Sprintf("faild to load Ref info for repoId:%s, refName:%s", repoId, refName))
+ return "", errors.NotFound.Wrap(err, fmt.Sprintf("failed to load Ref info for repoId:%s, refName:%s", repoId, refName))
}
return ref.CommitSha, nil
}
diff --git a/backend/plugins/tapd/api/blueprint_v200.go
b/backend/plugins/tapd/api/blueprint_v200.go
index 676ed30cf..abb8695a7 100644
--- a/backend/plugins/tapd/api/blueprint_v200.go
+++ b/backend/plugins/tapd/api/blueprint_v200.go
@@ -96,7 +96,7 @@ func makeScopesV200(
// get workspace and scope config from db
tapdWorkspace, scopeConfig := scopeDetail.Scope,
scopeDetail.ScopeConfig
- // add wrokspace to scopes
+ // add workspace to scopes
if utils.StringsContains(scopeConfig.Entities,
plugin.DOMAIN_TYPE_TICKET) {
id := idgen.Generate(connection.ID, tapdWorkspace.Id)
board := ticket.NewBoard(id, tapdWorkspace.Name)
diff --git
a/backend/plugins/tapd/models/migrationscripts/archived/tapd_connection.go
b/backend/plugins/tapd/models/migrationscripts/archived/tapd_connection.go
index b07d89d6a..9af60b102 100644
--- a/backend/plugins/tapd/models/migrationscripts/archived/tapd_connection.go
+++ b/backend/plugins/tapd/models/migrationscripts/archived/tapd_connection.go
@@ -39,7 +39,7 @@ type RestConnection struct {
BaseConnection `mapstructure:",squash"`
Endpoint string `mapstructure:"endpoint" validate:"required"
json:"endpoint"`
Proxy string `mapstructure:"proxy" json:"proxy"`
- RateLimitPerHour int `comment:"api request rate limt per hour" json:"rateLimit"`
+ RateLimitPerHour int `comment:"api request rate limit per hour" json:"rateLimit"`
}
type TapdConnection struct {
diff --git
a/backend/plugins/zentao/models/migrationscripts/archived/connection.go
b/backend/plugins/zentao/models/migrationscripts/archived/connection.go
index dab806d98..7bee6db34 100644
--- a/backend/plugins/zentao/models/migrationscripts/archived/connection.go
+++ b/backend/plugins/zentao/models/migrationscripts/archived/connection.go
@@ -46,7 +46,7 @@ type RestConnection struct {
BaseConnection `mapstructure:",squash"`
Endpoint string `mapstructure:"endpoint" validate:"required"
json:"endpoint"`
Proxy string `mapstructure:"proxy" json:"proxy"`
- RateLimitPerHour int `comment:"api request rate limt per hour" json:"rateLimit"`
+ RateLimitPerHour int `comment:"api request rate limit per hour" json:"rateLimit"`
}
type BaseConnection struct {
diff --git a/backend/plugins/zentao/tasks/execution_summary_dev_extractor.go
b/backend/plugins/zentao/tasks/execution_summary_dev_extractor.go
index 7d3fd24a3..b77ab4f26 100644
--- a/backend/plugins/zentao/tasks/execution_summary_dev_extractor.go
+++ b/backend/plugins/zentao/tasks/execution_summary_dev_extractor.go
@@ -33,7 +33,7 @@ var ExtractExecutionSummaryDevMeta = plugin.SubTaskMeta{
Name: "extractExecutionSummaryDev",
EntryPoint: ExtractExecutionSummaryDev,
EnabledByDefault: true,
- Description: "extract Zentao execution summary from build-in page api",
+ Description: "extract Zentao execution summary from built-in page api",
DomainTypes: []string{plugin.DOMAIN_TYPE_TICKET},
}
diff --git a/backend/plugins/zentao/tasks/task_collector.go
b/backend/plugins/zentao/tasks/task_collector.go
index ea8a3976a..ac0d9d7e6 100644
--- a/backend/plugins/zentao/tasks/task_collector.go
+++ b/backend/plugins/zentao/tasks/task_collector.go
@@ -88,7 +88,7 @@ func CollectTask(taskCtx plugin.SubTaskContext) errors.Error {
// extract task's children
childTasks, err := extractChildrenWithDFS(task)
if err != nil {
- return nil, errors.Default.New(fmt.Sprintf("extract task: %v chidren err: %v", task, err))
+ return nil, errors.Default.New(fmt.Sprintf("extract task: %v children err: %v", task, err))
}
for _, task := range childTasks {
allTaskRecords[task.Id] = task
diff --git a/backend/python/pydevlake/pydevlake/api.py
b/backend/python/pydevlake/pydevlake/api.py
index a50b71d94..cc47601cd 100644
--- a/backend/python/pydevlake/pydevlake/api.py
+++ b/backend/python/pydevlake/pydevlake/api.py
@@ -221,7 +221,7 @@ class Paginator:
"""
Extracts or compute the id of the next page from the response,
e.g. incrementing the value of `page` of a JSON body.
- This id will be suplied to the next request via `set_next_page_param`.
+ This id will be supplied to the next request via `set_next_page_param`.
Returning None indicates that the response is the last page.
"""
pass
diff --git a/backend/server/api/README.md b/backend/server/api/README.md
index 887a85b90..04273b385 100644
--- a/backend/server/api/README.md
+++ b/backend/server/api/README.md
@@ -18,7 +18,7 @@ limitations under the License.
### Summary
-Users can set pipepline plan by config-ui to create schedule jobs.
+Users can set pipeline plan by config-ui to create schedule jobs.
And config-ui will send blueprint request with cronConfig in crontab format.
### Cron Job
diff --git a/backend/server/services/blueprint.go
b/backend/server/services/blueprint.go
index 91bab6934..2fb05c3b7 100644
--- a/backend/server/services/blueprint.go
+++ b/backend/server/services/blueprint.go
@@ -258,7 +258,7 @@ func DeleteBlueprint(id uint64) errors.Error {
var blueprintReloadLock sync.Mutex
var bpCronIdMap map[uint64]cron.EntryID
-// ReloadBlueprints reloades cronjobs based on blueprints
+// ReloadBlueprints reloads cronjobs based on blueprints
func ReloadBlueprints() (err errors.Error) {
enable := true
isManual := false
diff --git a/backend/server/services/pipeline.go
b/backend/server/services/pipeline.go
index f6c770f9c..080028079 100644
--- a/backend/server/services/pipeline.go
+++ b/backend/server/services/pipeline.go
@@ -439,7 +439,7 @@ func CancelPipeline(pipelineId uint64) errors.Error {
pipeline.Status = models.TASK_CANCELLED
err = db.Update(pipeline)
if err != nil {
- return errors.Default.Wrap(err, "faile to update pipeline")
+ return errors.Default.Wrap(err, "failed to update pipeline")
}
// now, with RunPipelineInQueue being block and target pipeline
got updated
// we should update the related tasks as well
@@ -449,7 +449,7 @@ func CancelPipeline(pipelineId uint64) errors.Error {
dal.Where("pipeline_id = ?", pipelineId),
)
if err != nil {
- return errors.Default.Wrap(err, "faile to update pipeline tasks")
+ return errors.Default.Wrap(err, "failed to update pipeline tasks")
}
// the target pipeline is pending, no running, no need to
perform the actual cancel operation
return nil
@@ -563,7 +563,7 @@ func RerunPipeline(pipelineId uint64, task *models.Task)
(tasks []*models.Task,
rerunTasks = append(rerunTasks, rerunTask)
}
- // mark pipline rerun
+ // mark pipeline rerun
err = tx.UpdateColumn(&models.Pipeline{},
"status", models.TASK_RERUN,
dal.Where("id = ?", pipelineId),
diff --git a/backend/server/services/pipeline_runner.go
b/backend/server/services/pipeline_runner.go
index b16dc2dc3..0f215a061 100644
--- a/backend/server/services/pipeline_runner.go
+++ b/backend/server/services/pipeline_runner.go
@@ -106,8 +106,8 @@ func runPipeline(pipelineId uint64) errors.Error {
return NotifyExternal(pipelineId)
}
-// ComputePipelineStatus determines pipleline status by its latest(rerun included) tasks statuses
-// 1. TASK_COMPLETED: all tasks were executed sucessfully
+// ComputePipelineStatus determines pipeline status by its latest(rerun included) tasks statuses
+// 1. TASK_COMPLETED: all tasks were executed successfully
// 2. TASK_FAILED: SkipOnFail=false with failed task(s)
// 3. TASK_PARTIAL: SkipOnFail=true with failed task(s)
func ComputePipelineStatus(pipeline *models.Pipeline, isCancelled bool)
(string, errors.Error) {
diff --git a/backend/server/services/project.go
b/backend/server/services/project.go
index c030a9af5..a1b92b03a 100644
--- a/backend/server/services/project.go
+++ b/backend/server/services/project.go
@@ -106,7 +106,7 @@ func CreateProject(projectInput *models.ApiInputProject)
(*models.ApiOutputProje
return nil, err
}
- // create transaction to updte multiple tables
+ // create transaction to update multiple tables
var err errors.Error
tx := db.Begin()
defer func() {
diff --git a/backend/test/helper/client.go b/backend/test/helper/client.go
index 0af014b05..8c5c9d22a 100644
--- a/backend/test/helper/client.go
+++ b/backend/test/helper/client.go
@@ -345,7 +345,7 @@ func runWithTimeout(timeout time.Duration, f func() (bool,
errors.Error)) errors
select {
case <-timer:
if !resp.completed {
- return errors.Default.New(fmt.Sprintf("timed out calling function after %d miliseconds", timeout.Milliseconds()))
+ return errors.Default.New(fmt.Sprintf("timed out calling function after %d milliseconds", timeout.Milliseconds()))
}
return nil
case resp = <-resChan:
diff --git a/config-ui/src/plugins/register/github/transformation.tsx
b/config-ui/src/plugins/register/github/transformation.tsx
index ef9d4d747..1758f7e68 100644
--- a/config-ui/src/plugins/register/github/transformation.tsx
+++ b/config-ui/src/plugins/register/github/transformation.tsx
@@ -206,7 +206,7 @@ const renderCollapseItems = ({
label={
<>
<span style={{ marginRight: 4 }}>Issue Severity</span>
- <HelpTooltip content="Labels that match the RegEx will be set as the serverity of an issue." />
+ <HelpTooltip content="Labels that match the RegEx will be set as the severity of an issue." />
</>
}
>
diff --git
a/config-ui/src/routes/blueprint/detail/components/sync-policy/index.tsx
b/config-ui/src/routes/blueprint/detail/components/sync-policy/index.tsx
index 9efd012f1..9f7b24c73 100644
--- a/config-ui/src/routes/blueprint/detail/components/sync-policy/index.tsx
+++ b/config-ui/src/routes/blueprint/detail/components/sync-policy/index.tsx
@@ -76,7 +76,7 @@ export const SyncPolicy = ({
const cron = useMemo(() => getCron(isManual, cronConfig), [isManual,
cronConfig]);
- const [mintue, hour, day, month, week] = useMemo(() => cronConfig.split(' '), [cronConfig]);
+ const [minute, hour, day, month, week] = useMemo(() => cronConfig.split(' '), [cronConfig]);
const handleChangeFrequency = (e: RadioChangeEvent) => {
const value = e.target.value;
@@ -152,32 +152,32 @@ export const SyncPolicy = ({
<Space>
<Block title="Minute">
<Input
- value={mintue}
+ value={minute}
onChange={(e) => onChangeCronConfig([e.target.value, hour,
day, month, week].join(' '))}
/>
</Block>
<Block title="Hour">
<Input
value={hour}
- onChange={(e) => onChangeCronConfig([mintue, e.target.value, day, month, week].join(' '))}
+ onChange={(e) => onChangeCronConfig([minute, e.target.value, day, month, week].join(' '))}
/>
</Block>
<Block title="Day">
<Input
value={day}
- onChange={(e) => onChangeCronConfig([mintue, hour, e.target.value, month, week].join(' '))}
+ onChange={(e) => onChangeCronConfig([minute, hour, e.target.value, month, week].join(' '))}
/>
</Block>
<Block title="Month">
<Input
value={month}
- onChange={(e) => onChangeCronConfig([mintue, hour, day, e.target.value, week].join(' '))}
+ onChange={(e) => onChangeCronConfig([minute, hour, day, e.target.value, week].join(' '))}
/>
</Block>
<Block title="Week">
<Input
value={week}
- onChange={(e) => onChangeCronConfig([mintue, hour, day, month, e.target.value].join(' '))}
+ onChange={(e) => onChangeCronConfig([minute, hour, day, month, e.target.value].join(' '))}
/>
</Block>
</Space>
diff --git a/config-ui/src/routes/onboard/components/card.tsx
b/config-ui/src/routes/onboard/components/card.tsx
index 3c59ee082..d2250573d 100644
--- a/config-ui/src/routes/onboard/components/card.tsx
+++ b/config-ui/src/routes/onboard/components/card.tsx
@@ -32,7 +32,7 @@ interface Props {
}
export const OnboardCard = ({ style }: Props) => {
- const [oeprating, setOperating] = useState(false);
+ const [operating, setOperating] = useState(false);
const [version, setVersion] = useState(0);
const navigate = useNavigate();
@@ -91,7 +91,7 @@ export const OnboardCard = ({ style }: Props) => {
title: 'Permanently close this entry?',
content: 'You will not be able to get back to the onboarding session
again.',
okButtonProps: {
- loading: oeprating,
+ loading: operating,
},
okText: 'Confirm',
onOk: async () => {
diff --git a/config-ui/src/routes/onboard/index.tsx
b/config-ui/src/routes/onboard/index.tsx
index 061be2033..d29ed40fb 100644
--- a/config-ui/src/routes/onboard/index.tsx
+++ b/config-ui/src/routes/onboard/index.tsx
@@ -129,7 +129,7 @@ export const Onboard = ({ logo, title }: Props) => {
{[1, 2, 3].includes(step) && (
<S.Step>
{steps.map((it) => (
- <S.StepItem key={it.step} $actived={it.step === step} $activedColor={colorPrimary}>
+ <S.StepItem key={it.step} $activated={it.step === step} $activatedColor={colorPrimary}>
<span>{it.step}</span>
<span>{it.title}</span>
</S.StepItem>
diff --git a/config-ui/src/routes/onboard/styled.ts
b/config-ui/src/routes/onboard/styled.ts
index b8d95a9e7..3e083694c 100644
--- a/config-ui/src/routes/onboard/styled.ts
+++ b/config-ui/src/routes/onboard/styled.ts
@@ -49,7 +49,7 @@ export const Step = styled.ul`
margin-bottom: 50px;
`;
-export const StepItem = styled.li<{ $actived: boolean; $activedColor: string }>`
+export const StepItem = styled.li<{ $activated: boolean; $activatedColor: string }>`
display: flex;
align-items: center;
position: relative;
@@ -65,19 +65,19 @@ export const StepItem = styled.li<{ $actived: boolean;
$activedColor: string }>`
border: 1px solid rgba(0, 0, 0, 0.25);
border-radius: 50%;
- ${({ $actived, $activedColor }) =>
- $actived
+ ${({ $activated, $activatedColor }) =>
+ $activated
? `
color: #fff;
- background-color: ${$activedColor};
+ background-color: ${$activatedColor};
border: none;
`
: ''}
}
span:last-child {
- ${({ $actived }) =>
- $actived
+ ${({ $activated }) =>
+ $activated
? `
font-size: 24px;
font-weight: 600;`
diff --git a/grafana/_archive/DeliveryQuality(RequireJiraAndGitlabData).json
b/grafana/_archive/DeliveryQuality(RequireJiraAndGitlabData).json
index 92e2fef5e..4f384919b 100644
--- a/grafana/_archive/DeliveryQuality(RequireJiraAndGitlabData).json
+++ b/grafana/_archive/DeliveryQuality(RequireJiraAndGitlabData).json
@@ -621,7 +621,7 @@
"metricColumn": "none",
"queryType": "randomWalk",
"rawQuery": true,
- "rawSql": "SELECT\n timestamp(DATE_ADD(date(gitlab_created_at), INTERVAL -$interval(date(gitlab_created_at))+1 DAY)) as time,\n avg(review_rounds) as \"Pull Request Reveiw Round\"\nFROM\n gitlab_merge_requests gmr\n LEFT JOIN jira_board_gitlab_projects jbgp ON jbgp.gitlab_project_id = gmr.project_id\nWHERE\n state = 'merged'\n and review_rounds > 0\n and jbgp.jira_board_id = $board_id\n and $__timeFilter(gitlab_created_at)\nGROUP BY 1\nORDER BY 1",
+ "rawSql": "SELECT\n timestamp(DATE_ADD(date(gitlab_created_at), INTERVAL -$interval(date(gitlab_created_at))+1 DAY)) as time,\n avg(review_rounds) as \"Pull Request Review Round\"\nFROM\n gitlab_merge_requests gmr\n LEFT JOIN jira_board_gitlab_projects jbgp ON jbgp.gitlab_project_id = gmr.project_id\nWHERE\n state = 'merged'\n and review_rounds > 0\n and jbgp.jira_board_id = $board_id\n and $__timeFilter(gitlab_created_at)\nGROUP BY 1\nORDER BY 1",
"refId": "A",
"select": [
[
diff --git a/grafana/_archive/Gitlab.json b/grafana/_archive/Gitlab.json
index 1baf386b8..806613efb 100644
--- a/grafana/_archive/Gitlab.json
+++ b/grafana/_archive/Gitlab.json
@@ -56,7 +56,7 @@
},
"id": 48,
"options": {
- "content": "<div style=\"display: block;text-align: center;margin-top: 56px;\">\n <div style=\"display: inline-flex;\">\n <img src=\"/grafana/public/img/lake/1.png\" alt=\"No.1\" width=\"56\">\n <p style=\"font-size:24px; margin:10px; color:#BFC1C8;\"><b>MR Troughput and Pass Rate<b></b></b></p><b><b>\n </div>\n</div>",
+ "content": "<div style=\"display: block;text-align: center;margin-top: 56px;\">\n <div style=\"display: inline-flex;\">\n <img src=\"/grafana/public/img/lake/1.png\" alt=\"No.1\" width=\"56\">\n <p style=\"font-size:24px; margin:10px; color:#BFC1C8;\"><b>MR Throughput and Pass Rate<b></b></b></p><b><b>\n </div>\n</div>",
"mode": "html"
},
"pluginVersion": "8.0.6",
@@ -857,7 +857,7 @@
"metricColumn": "none",
"queryType": "randomWalk",
"rawQuery": true,
- "rawSql": "SELECT\n timestamp(DATE_ADD(date(gitlab_created_at), INTERVAL -$interval(date(gitlab_created_at))+1 DAY)) as time,\n avg(review_rounds) as \"Pull Request Reveiw Round\"\nFROM\n gitlab_merge_requests gmr\nWHERE\n state = 'merged'\n and review_rounds > 0\n and gmr.project_id = $repo_id\n and $__timeFilter(gitlab_created_at)\nGROUP BY 1\nORDER BY 1",
+ "rawSql": "SELECT\n timestamp(DATE_ADD(date(gitlab_created_at), INTERVAL -$interval(date(gitlab_created_at))+1 DAY)) as time,\n avg(review_rounds) as \"Pull Request Review Round\"\nFROM\n gitlab_merge_requests gmr\nWHERE\n state = 'merged'\n and review_rounds > 0\n and gmr.project_id = $repo_id\n and $__timeFilter(gitlab_created_at)\nGROUP BY 1\nORDER BY 1",
"refId": "A",
"select": [
[
diff --git a/grafana/dashboards/DORADebug.json
b/grafana/dashboards/DORADebug.json
index 935b035c1..1fc487e93 100644
--- a/grafana/dashboards/DORADebug.json
+++ b/grafana/dashboards/DORADebug.json
@@ -2641,7 +2641,7 @@
]
}
],
- "title": "Step 5 - check the median change lead time for each month in Figure 4 (Compare the change_lead_time with the max ranks in GREEN before the first occurence of ORANGE in each month)",
+ "title": "Step 5 - check the median change lead time for each month in Figure 4 (Compare the change_lead_time with the max ranks in GREEN before the first occurrence of ORANGE in each month)",
"type": "table"
},
{
@@ -3464,7 +3464,7 @@
"metricColumn": "none",
"queryType": "randomWalk",
"rawQuery": true,
- "rawSql": "with _deployments as(\n select\n distinct
d.cicd_deployment_id as deployment_id,\n d.result,\n d.environment,\n
d.finished_date,\n d.cicd_scope_id,\n pm.project_name\n from\n
cicd_deployment_commits d\n join project_mapping pm on d.cicd_scope_id =
pm.row_id\n and pm.`table` = 'cicd_scopes'\n where\n -- only result
needs to specified, not envioronment\n d.result = 'SUCCESS' -- choose your
project_name\n and pm.project_name in ($p [...]
+ "rawSql": "with _deployments as(\n select\n distinct
d.cicd_deployment_id as deployment_id,\n d.result,\n d.environment,\n
d.finished_date,\n d.cicd_scope_id,\n pm.project_name\n from\n
cicd_deployment_commits d\n join project_mapping pm on d.cicd_scope_id =
pm.row_id\n and pm.`table` = 'cicd_scopes'\n where\n -- only result
needs to be specified, not environment\n    d.result = 'SUCCESS' -- choose your
project_name\n and pm.project_name in ($pr [...]
"refId": "A",
"select": [
[
diff --git a/grafana/dashboards/DORADetails-ChangeFailureRate.json
b/grafana/dashboards/DORADetails-ChangeFailureRate.json
index 8048b282b..87e3b9779 100644
--- a/grafana/dashboards/DORADetails-ChangeFailureRate.json
+++ b/grafana/dashboards/DORADetails-ChangeFailureRate.json
@@ -627,7 +627,7 @@
"metricColumn": "none",
"queryType": "randomWalk",
"rawQuery": true,
- "rawSql": "with _deployments as(\n select\n distinct
d.cicd_deployment_id as deployment_id,\n d.result,\n d.environment,\n
d.finished_date,\n d.cicd_scope_id,\n pm.project_name\n from\n
cicd_deployment_commits d\n join project_mapping pm on d.cicd_scope_id =
pm.row_id\n and pm.`table` = 'cicd_scopes'\n where\n -- only result
needs to specified, not envioronment\n d.result = 'SUCCESS' -- choose your
project_name\n and pm.project_name in ($p [...]
+ "rawSql": "with _deployments as(\n select\n distinct
d.cicd_deployment_id as deployment_id,\n d.result,\n d.environment,\n
d.finished_date,\n d.cicd_scope_id,\n pm.project_name\n from\n
cicd_deployment_commits d\n join project_mapping pm on d.cicd_scope_id =
pm.row_id\n and pm.`table` = 'cicd_scopes'\n where\n -- only result
needs to be specified, not environment\n    d.result = 'SUCCESS' -- choose your
project_name\n and pm.project_name in ($pr [...]
"refId": "A",
"select": [
[
diff --git
a/grafana/dashboards/DemoHowFastDoWeRespondToCustomerRequirements.json
b/grafana/dashboards/DemoHowFastDoWeRespondToCustomerRequirements.json
index 5a4b1ccfa..c15b08caf 100644
--- a/grafana/dashboards/DemoHowFastDoWeRespondToCustomerRequirements.json
+++ b/grafana/dashboards/DemoHowFastDoWeRespondToCustomerRequirements.json
@@ -169,7 +169,7 @@
},
"id": 101,
"options": {
- "content": "<div>\n <img border=\"0\"
src=\"/grafana/public/img/lake/logo.png\" style=\"padding-bottom:20px\"
alt=\"Merico\" width=\"40\"></img>\n <h2 style=\"display:inline-block;\">MARI
Guide - Requirement Lead Time</h2>\n</div>\n\nSection |
Description\n:----------------- | :-------------\nMetric Definition | Total
duration of requirements from proposal to delivery. It can be divided by flow
status in the practice domain or project management system to count the time
share o [...]
+ "content": "<div>\n <img border=\"0\"
src=\"/grafana/public/img/lake/logo.png\" style=\"padding-bottom:20px\"
alt=\"Merico\" width=\"40\"></img>\n <h2 style=\"display:inline-block;\">MARI
Guide - Requirement Lead Time</h2>\n</div>\n\nSection |
Description\n:----------------- | :-------------\nMetric Definition | Total
duration of requirements from proposal to delivery. It can be divided by flow
status in the practice domain or project management system to count the time
share o [...]
"mode": "markdown"
},
"pluginVersion": "8.0.6",