ON CLUSTER not added when creating __dbt_backup #205
Comments
I will take a look. Hi @ikeniborn, are you expecting the table to be created on the cluster or not? If you are trying to use a ClickHouse cluster, you should use …
I can reproduce the problem now. It seems to be a compatibility issue. … model I suggest a …
In #206, my solution to this problem is to provide a detailed message that reflects the error and make … I am not sure this is the best way, so any comments or suggestions are welcome.
Hi @gfunc, I reinstalled dbt-clickhouse 1.4.8, where everything works without errors. I have 1 shard and 2 replicas in the cluster. I don't want to use a distributed table right now; I only want to create the table on the cluster. Once the cluster has more than 1 shard, I will refactor all models to distributed_table.
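For reference, the behaviour being asked for here is plain ON CLUSTER creation of the model table, with no Distributed table. A minimal sketch, assuming the '{cluster}' macro from the profile and column types that are not stated in the issue:

-- Hypothetical illustration only: create the same table on every node of the cluster.
-- Column types are assumptions; the MergeTree engine and ORDER BY come from the model config further down.
CREATE TABLE dimension.dim_twitter_pinned_tweet ON CLUSTER '{cluster}'
(
    twitter_pinned_tweet_id String,
    twitter_pinned_tweet_text String,
    twitter_pinned_tweet_created_at DateTime,
    twitter_pinned_tweet_day_old Int64,
    updated_dttm DateTime
)
ENGINE = MergeTree()
ORDER BY (twitter_pinned_tweet_id);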
Describe the bug
Steps to reproduce
clickhouse:
  target: default
  outputs:
    default:
      driver: native
      type: clickhouse
      schema: default
      user: "{{ env_var('DBT_ENV_SECRET_USER') }}"
      password: "{{ env_var('DBT_ENV_SECRET_PASSWORD') }}"
      # optional fields
      port: 9000
      host: "{{ env_var('DBT_ENV_SECRET_HOST') }}"
      verify: False
      secure: False
      connect_timeout: 60
      # compression: 'gzip'
      threads: 8
      send_receive_timeout: 100000
      check_exchange: False
      cluster: "{cluster}"
      cluster_mode: False
      # Native (clickhouse-driver) connection settings
      sync_request_timeout: 5
      compress_block_size: 1048576
      use_lw_deletes: True
      custom_settings:
        enable_optimize_predicate_expression: 1
        max_block_size: 65536
        max_insert_block_size: 2097152
        max_memory_usage: 130000000000
        max_bytes_before_external_group_by: 100000000000
        max_bytes_before_external_sort: 50000000000
        max_threads: 128
        max_insert_threads: 64
        max_query_size: 524288
        async_insert: 1
        async_insert_threads: 64
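The profile relies on cluster: "{cluster}". A hedged way to confirm that the cluster definition and the macro exist on the target server (this check is not part of the original report) is to query the system tables in clickhouse-client:

-- List known clusters and their shard/replica layout.
SELECT cluster, shard_num, replica_num, host_name
FROM system.clusters;

-- Confirm the value the {cluster} macro expands to on this node.
SELECT macro, substitution
FROM system.macros
WHERE macro = 'cluster';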
Expected behaviour
{{ config(
    enabled = true,
    schema = 'dimension',
    tags = ["dimension"],
    materialized = "table",
    engine = "MergeTree()",
    order_by = ("twitter_pinned_tweet_id"),
) }}

select
    pinned_tweet_id as twitter_pinned_tweet_id
    ,pinned_tweet_text as twitter_pinned_tweet_text
    -- ,vector(pinned_tweet_text) as twitter_pinned_tweet_text_vector
    ,pinned_tweet_created_at as twitter_pinned_tweet_created_at
    ,date_diff('day', pinned_tweet_created_at, now()) as twitter_pinned_tweet_day_old
    ,now() as updated_dttm
from
    {{ ref("raw_twitter_pinned_tweets") }}
order by
    updated_dttm desc
limit 1 by
    pinned_tweet_id
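With cluster set in the profile, the expectation is that every DDL step of the table materialization carries ON CLUSTER, so that the backup table exists on all replicas before the swap. A sketch of the expected shape (not the adapter's actual macros; table names follow the log below):

-- Sketch: create the backup table on every replica, copying the target's schema and engine.
CREATE TABLE dimension.dim_twitter_pinned_tweet__dbt_backup ON CLUSTER '{cluster}'
    AS dimension.dim_twitter_pinned_tweet;

-- Sketch: swap the tables cluster-wide.
EXCHANGE TABLES dimension.dim_twitter_pinned_tweet__dbt_backup
    AND dimension.dim_twitter_pinned_tweet ON CLUSTER '{cluster}';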
Code examples, such as models or profile settings
dbt and/or ClickHouse server logs
12:50:07.068025 [debug] [Thread-1 ]: dbt_clickhouse adapter: On model.clickhouse.dim_twitter_pinned_tweet: /* {"app": "dbt", "dbt_version": "1.4.9", "profile_name": "clickhouse", "target_name": "*****", "node_id": "model.clickhouse.dim_twitter_pinned_tweet"} */
12:50:07.254117 [debug] [Thread-1 ]: dbt_clickhouse adapter: SQL status: OK in 0.19 seconds
12:50:07.338073 [debug] [Thread-1 ]: dbt_clickhouse adapter: On model.clickhouse.dim_twitter_pinned_tweet: /* {"app": "dbt", "dbt_version": "1.4.9", "profile_name": "clickhouse", "target_name": "*****", "node_id": "model.clickhouse.dim_twitter_pinned_tweet"} */
engine = MergeTree()
order by (twitter_pinned_tweet_id)
select
pinned_tweet_id as twitter_pinned_tweet_id
,pinned_tweet_text as twitter_pinned_tweet_text
-- ,vector(pinned_tweet_text) as twitter_pinned_tweet_text_vector
,pinned_tweet_created_at as twitter_pinned_tweet_created_at
,date_diff('day',pinned_tweet_created_at, now()) as twitter_pinned_tweet_day_old
,now() as updated_dttm
from
raw.raw_twitter_pinned_tweets
order by
updated_dttm desc
limit 1 by
pinned_tweet_id
)
...
12:50:07.429639 [debug] [Thread-1 ]: dbt_clickhouse adapter: SQL status: OK in 0.09 seconds
12:50:07.460572 [debug] [Thread-1 ]: dbt_clickhouse adapter: On model.clickhouse.dim_twitter_pinned_tweet: /* {"app": "dbt", "dbt_version": "1.4.9", "profile_name": "clickhouse", "target_name": "*****", "node_id": "model.clickhouse.dim_twitter_pinned_tweet"} */
...
12:50:07.529162 [debug] [Thread-1 ]: dbt_clickhouse adapter: SQL status: OK in 0.07 seconds
12:50:07.549063 [debug] [Thread-1 ]: Writing runtime sql for node "model.clickhouse.dim_twitter_pinned_tweet"
12:50:07.550210 [debug] [Thread-1 ]: dbt_clickhouse adapter: On model.clickhouse.dim_twitter_pinned_tweet: /* {"app": "dbt", "dbt_version": "1.4.9", "profile_name": "clickhouse", "target_name": "*****", "node_id": "model.clickhouse.dim_twitter_pinned_tweet"} */
select
pinned_tweet_id as twitter_pinned_tweet_id
,pinned_tweet_text as twitter_pinned_tweet_text
-- ,vector(pinned_tweet_text) as twitter_pinned_tweet_text_vector
,pinned_tweet_created_at as twitter_pinned_tweet_created_at
,date_diff('day',pinned_tweet_created_at, now()) as twitter_pinned_tweet_day_old
,now() as updated_dttm
from
raw.raw_twitter_pinned_tweets
order by
updated_dttm desc
limit 1 by
pinned_tweet_id
...
12:50:07.686856 [debug] [Thread-1 ]: dbt_clickhouse adapter: SQL status: OK in 0.14 seconds
12:50:07.706430 [debug] [Thread-1 ]: dbt_clickhouse adapter: On model.clickhouse.dim_twitter_pinned_tweet: /* {"app": "dbt", "dbt_version": "1.4.9", "profile_name": "clickhouse", "target_name": "*****", "node_id": "model.clickhouse.dim_twitter_pinned_tweet"} */
EXCHANGE TABLES dimension.dim_twitter_pinned_tweet__dbt_backup AND dimension.dim_twitter_pinned_tweet
...
12:50:07.902549 [debug] [Thread-1 ]: dbt_clickhouse adapter: Error running SQL: /* {"app": "dbt", "dbt_version": "1.4.9", "profile_name": "clickhouse", "target_name": "*****", "node_id": "model.clickhouse.dim_twitter_pinned_tweet"} */
EXCHANGE TABLES dimension.dim_twitter_pinned_tweet__dbt_backup AND dimension.dim_twitter_pinned_tweet
12:50:07.903325 [debug] [Thread-1 ]: Timing info for model.clickhouse.dim_twitter_pinned_tweet (execute): 2023-11-08 12:50:06.967194 => 2023-11-08 12:50:07.903201
12:50:07.907147 [debug] [Thread-1 ]: Database Error in model dim_twitter_pinned_tweet (models/dimension/twitter/dim_twitter_pinned_tweet.sql)
Code: 60.
DB::Exception: There was an error on [10.10.1.217:9000]: Code: 60. DB::Exception: Table dimension.dim_twitter_pinned_tweet__dbt_backup doesn't exist. (UNKNOWN_TABLE) (version 23.10.1.1976 (official build)). Stack trace: …
compiled Code at target/run/clickhouse/models/dimension/twitter/dim_twitter_pinned_tweet.sql
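A hedged diagnostic for this failure is to check on which replicas the backup table actually exists at the moment the EXCHANGE runs (table and database names follow the log above; replace '{cluster}' with the real cluster name if macros are not expanded in the table function):

-- Show every replica that has (or is missing) the tables involved in the EXCHANGE.
SELECT hostName() AS replica, database, name
FROM clusterAllReplicas('{cluster}', system.tables)
WHERE database = 'dimension'
  AND name LIKE 'dim_twitter_pinned_tweet%';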
Configuration
Environment
ClickHouse server
CREATE TABLE statements for tables involved: