
Encounter error "org.apache.shardingsphere.infra.exception.kernel.metadata.TableNotFoundException: Table or view '%s' does not exist" with Read-Write Splitting #31179

Closed
jonaskahn opened this issue May 9, 2024 · 5 comments


@jonaskahn

jonaskahn commented May 9, 2024

Bug Report

Which version of ShardingSphere did you use?

5.4.1 and 5.5.0 (upgrading from 5.3.2)

Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?

ShardingSphere-JDBC

Expected behavior

Read-Write Splitting works as it did in version 5.3.2.

Actual behavior

I set up 3 data sources (1 master, 2 slaves) that all point to the same database. My current setup (using Kotlin) works well, but when I tried versions 5.4.1 and 5.5.0 it broke completely: the error is thrown on the first query to the DB while the application (Spring Boot) is starting: org.apache.shardingsphere.infra.exception.kernel.metadata.TableNotFoundException: Table or view '%s' does not exist.

Here is my config in version 5.3.2:

@Configuration
@ConditionalOnProperty(prefix = "app.datasource", name = ["mode"], havingValue = "multi")
class ShardingSphereDataSourceConfigurer : AbstractDatasourceConfigurer() {
    @Value("\${app.datasource.schema:public}")
    private val schema: String? = null

    @Value("\${app.datasource.database:boffice_mst}")
    private val database: String? = null

    @Value("\${app.datasource.host:localhost}")
    private val host: String? = null

    @Value("\${app.datasource.port:5432}")
    private val port: String? = null

    @Value("\${app.datasource.username:username}")
    private val username: String? = null

    @Value("\${app.datasource.password:password}")
    private val password: String? = null

    @Value("\${app.datasource.slave1.host:localhost}")
    private val sl1Host: String? = null

    @Value("\${app.datasource.slave1.port:5432}")
    private val sl1Port: String? = null

    @Value("\${app.datasource.slave2.host:localhost}")
    private val sl2Host: String? = null

    @Value("\${app.datasource.slave2.port:5432}")
    private val sl2Port: String? = null

    @Value("\${app.datasource.maximumPoolSize}")
    private val maximumPoolSize: Int? = null

    @Value("\${app.datasource.connectionTimeout}")
    private val connectionTimeout: Long? = null

    @Value("\${app.datasource.maxLifetime}")
    private val maxLifetime: Long? = null

    @Bean
    fun ignoreJPAReadOnlyTransactionBeanPostProcessor(): IgnoreJPAReadOnlyTransactionBeanPostProcessor {
        return IgnoreJPAReadOnlyTransactionBeanPostProcessor()
    }

    @Bean
    @Throws(SQLException::class)
    fun dataSource(): DataSource {
        return ShardingSphereDataSourceFactory.createDataSource(
            "mdo-database",
            createModeConfiguration(),
            createDataSourceMap(),
            createRuleConfiguration(),
            createProperties()
        )
    }

    private fun createProperties(): Properties {
        val result = Properties()
        result.setProperty(ConfigurationPropertyKey.SQL_SHOW.key, "true")
        return result
    }

    private fun createRuleConfiguration(): Collection<RuleConfiguration> {
        val result: MutableCollection<RuleConfiguration> = LinkedList()
        result.add(createReadwriteSplittingRuleConfiguration())
        return result
    }

    private fun createReadwriteSplittingRuleConfiguration(): ReadwriteSplittingRuleConfiguration {
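        // 5.3.2 style: the write data source and the read data sources are wrapped in a
        // StaticReadwriteSplittingStrategyConfiguration before being passed to the rule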
        val dataSourceConfig = ReadwriteSplittingDataSourceRuleConfiguration(
            "standalone-read-write-ds",
            StaticReadwriteSplittingStrategyConfiguration(
                MASTER_SOURCE,
                listOf(SLAVE1_SOURCE, SLAVE2_SOURCE)
            ), null,
            "round_robin"
        )
        return ReadwriteSplittingRuleConfiguration(
            setOf(dataSourceConfig),
            java.util.Map.of("round_robin", AlgorithmConfiguration("ROUND_ROBIN", null))
        )
    }

    private fun createDataSourceMap(): Map<String, DataSource> {
        val result: MutableMap<String, DataSource> = LinkedHashMap()
        result[MASTER_SOURCE] = createDataSource(MASTER_SOURCE, host, port)
        result[SLAVE1_SOURCE] = createDataSource(SLAVE1_SOURCE, sl1Host, sl1Port)
        result[SLAVE2_SOURCE] = createDataSource(SLAVE2_SOURCE, sl2Host, sl2Port)
        return result
    }

    private fun createDataSource(datasourceId: String, host: String?, port: String?): DataSource {
        val dataSource = HikariDataSource()
        dataSource.driverClassName = "org.postgresql.Driver"
        dataSource.jdbcUrl = "jdbc:postgresql://${host}:${port}/${database}?stringtype=unspecified"
        dataSource.poolName = "DATASOURCE - $datasourceId"
        dataSource.username = username
        dataSource.password = password
        dataSource.schema = schema
        dataSource.maximumPoolSize = maximumPoolSize!!
        dataSource.connectionTimeout = connectionTimeout!!
        dataSource.maxLifetime = maxLifetime!!
        addMetaData(dataSource)
        return dataSource
    }

    private fun createModeConfiguration(): ModeConfiguration {
        return ModeConfiguration("Standalone", StandalonePersistRepositoryConfiguration("JDBC", Properties()))
    }

    companion object {
        private const val MASTER_SOURCE = "master"
        private const val SLAVE1_SOURCE = "slave1"
        private const val SLAVE2_SOURCE = "slave2"
    }
}

Here is my config in version 5.4.1 and 5.5.0:


@Configuration
@ConditionalOnProperty(prefix = "app.datasource", name = ["mode"], havingValue = "multi")
class ShardingSphereDataSourceConfigurer : AbstractDatasourceConfigurer() {
    @Value("\${app.datasource.schema:public}")
    private val schema: String? = null

    @Value("\${app.datasource.database:boffice_mst}")
    private val database: String? = null

    @Value("\${app.datasource.host:localhost}")
    private val host: String? = null

    @Value("\${app.datasource.port:5432}")
    private val port: String? = null

    @Value("\${app.datasource.username:username}")
    private val username: String? = null

    @Value("\${app.datasource.password:password}")
    private val password: String? = null

    @Value("\${app.datasource.slave1.host:localhost}")
    private val sl1Host: String? = null

    @Value("\${app.datasource.slave1.port:5432}")
    private val sl1Port: String? = null

    @Value("\${app.datasource.slave2.host:localhost}")
    private val sl2Host: String? = null

    @Value("\${app.datasource.slave2.port:5432}")
    private val sl2Port: String? = null

    @Value("\${app.datasource.maximumPoolSize}")
    private val maximumPoolSize: Int? = null

    @Value("\${app.datasource.connectionTimeout}")
    private val connectionTimeout: Long? = null

    @Value("\${app.datasource.maxLifetime}")
    private val maxLifetime: Long? = null

    @Bean
    fun ignoreJPAReadOnlyTransactionBeanPostProcessor(): IgnoreJPAReadOnlyTransactionBeanPostProcessor {
        return IgnoreJPAReadOnlyTransactionBeanPostProcessor()
    }

    @Bean
    @Throws(SQLException::class)
    fun dataSource(): DataSource {
        return ShardingSphereDataSourceFactory.createDataSource(
            "mdo-database",
            createModeConfiguration(),
            createDataSourceMap(),
            createRuleConfiguration(),
            createProperties()
        )
    }

    private fun createProperties(): Properties {
        val result = Properties()
        result.setProperty(ConfigurationPropertyKey.SQL_SHOW.key, "true")
        return result
    }

    private fun createRuleConfiguration(): Collection<RuleConfiguration> {
        val result: MutableCollection<RuleConfiguration> = LinkedList()
        result.add(createReadwriteSplittingRuleConfiguration())
        return result
    }

    private fun createReadwriteSplittingRuleConfiguration(): ReadwriteSplittingRuleConfiguration {
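        // 5.4.x / 5.5.0 style: the write data source name, the read data source names and the
        // TransactionalReadQueryStrategy are passed directly to the constructor (no static strategy wrapper)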
        val dataSourceConfig = ReadwriteSplittingDataSourceRuleConfiguration(
            "standalone-read-write-ds",
            MASTER_SOURCE,
            listOf(SLAVE1_SOURCE, SLAVE2_SOURCE),
            TransactionalReadQueryStrategy.DYNAMIC,
            "round_robin"
        )
        return ReadwriteSplittingRuleConfiguration(
            setOf(dataSourceConfig),
            java.util.Map.of("round_robin", AlgorithmConfiguration("ROUND_ROBIN", null))
        )
    }

    private fun createDataSourceMap(): Map<String, DataSource> {
        val result: MutableMap<String, DataSource> = LinkedHashMap()
        result[MASTER_SOURCE] = createDataSource(MASTER_SOURCE, host, port)
        result[SLAVE1_SOURCE] = createDataSource(SLAVE1_SOURCE, sl1Host, sl1Port)
        result[SLAVE2_SOURCE] = createDataSource(SLAVE2_SOURCE, sl2Host, sl2Port)
        return result
    }

    private fun createDataSource(datasourceId: String, host: String?, port: String?): DataSource {
        val dataSource = HikariDataSource()
        dataSource.driverClassName = "org.postgresql.Driver"
        dataSource.jdbcUrl = "jdbc:postgresql://${host}:${port}/${database}?stringtype=unspecified"
        dataSource.poolName = "DATASOURCE - $datasourceId"
        dataSource.username = username
        dataSource.password = password
        dataSource.schema = schema
        dataSource.maximumPoolSize = maximumPoolSize!!
        dataSource.connectionTimeout = connectionTimeout!!
        dataSource.maxLifetime = maxLifetime!!
        addMetaData(dataSource)
        return dataSource
    }

    private fun createModeConfiguration(): ModeConfiguration {
        return ModeConfiguration("Standalone", StandalonePersistRepositoryConfiguration("JDBC", Properties()))
    }

    companion object {
        private const val MASTER_SOURCE = "master"
        private const val SLAVE1_SOURCE = "slave1"
        private const val SLAVE2_SOURCE = "slave2"
    }
}

@airh2o

airh2o commented May 10, 2024

So do I.

@strongduanmu
Member

Hi @jonaskahn @airh2o, you can try configuring a single rule to load the related table metadata. For more details, you can refer to this doc - https://shardingsphere.apache.org/document/current/cn/faq/#%E5%8D%95%E8%A1%A8-table-or-view-s-does-not-exist-%E5%BC%82%E5%B8%B8%E5%A6%82%E4%BD%95%E8%A7%A3%E5%86%B3.

@jonaskahn
Author

I'll try it next week.

@Feng-xs

Feng-xs commented May 14, 2024

"I'm encountering the same issue when trying to implement sharding rules."
sharding version: 5.5.0
springboot version: 3.2.1
image

@jonaskahn
Author

jonaskahn commented May 14, 2024

Hi @jonaskahn @airh2o, you can try configuring a single rule to load the related table metadata. For more details, you can refer to this doc - https://shardingsphere.apache.org/document/current/cn/faq/#%E5%8D%95%E8%A1%A8-table-or-view-s-does-not-exist-%E5%BC%82%E5%B8%B8%E5%A6%82%E4%BD%95%E8%A7%A3%E5%86%B3.

I confirm the linked page solves the issue.
If you're using Kotlin or Java, you can take a look at my configuration:

    private fun createRuleConfiguration(): Collection<RuleConfiguration> {
        val result: MutableCollection<RuleConfiguration> = LinkedList()

        // Add a single rule configuration so single table metadata is loaded
        result.add(createSingleRuleConfiguration())

        result.add(createReadwriteSplittingRuleConfiguration())
        return result
    }

    private fun createSingleRuleConfiguration(): SingleRuleConfiguration {
        val rule = SingleRuleConfiguration()
        rule.tables = listOf("*.*")
        return rule
    }

Thanks @strongduanmu
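
If loading metadata for every table is heavier than needed, it should also be possible to narrow the single rule's table patterns instead of using "*.*". A minimal sketch in the same style as the configuration above, assuming the 5.5.0 SingleRuleConfiguration API shown in this thread ("master.*" is just an illustrative pattern reusing the data source name from the earlier config):

    private fun createSingleRuleConfiguration(): SingleRuleConfiguration {
        val rule = SingleRuleConfiguration()
        // "*.*" loads every single table from every data source; narrower patterns such as
        // "<dataSourceName>.*" or "<dataSourceName>.<tableName>" limit loading to one data
        // source or one table, e.g. "master.*" for the write data source defined above.
        // For schema-aware databases such as PostgreSQL, a schema-qualified pattern like
        // "master.public.*" may be needed.
        rule.tables = listOf("master.*")
        return rule
    }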
