3.00.0

To use DolphinDB server 3.00.0, please upgrade with the latest dolphindb.dos file.

Version 3.00.0.1

System Impacts Caused by Bug Fixes

Introduced a change to the permissions required to call the shell function:

  • In previous releases, there was no restriction on administrators calling the shell function.

  • Since this release, calling the shell function is not allowed for any user by default, including administrators. To call it, set the configuration parameter enableShellFunction to true.
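
    A minimal sketch of re-enabling the function (the file name assumes a standalone deployment; for a cluster, the corresponding node configuration file applies):

    // Add the following line to dolphindb.cfg, then restart the server:
    // enableShellFunction=true

    // An administrator can then call shell again, e.g.:
    shell("echo hello")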

Version 3.00.0.0

Changes Made to Match Industry Practices

  • Changed the rounding behavior when converting a floating-point number to a DECIMAL type from truncation to rounding.

    decimal128(0.000009,5)
    // Output (previous releases): 0.00000
    // Output (current release): 0.00001
  • Changed the handling of function aliases:
    • In 2.00 releases, functions were processed using their original names;
    • Since this release, functions are processed using the alias specified in the function definition.

    Example 1: Calling pow (alias for power) in a lambda expression results in a change in the function name in the result.

    def f1(x){return def(k): pow(x,k)}
    f1(1)
    //Output (2.00 releases): {def (x, k)->power(x, k)}{1}
    //Output (current release): {def (x, k)->pow(x, k)}{1}

    Example 2: Calling mean (alias for avg) in a SQL statement results in a change in the generated column names.

    t=table([1,2,3,4] as a,[2,3,45,6] as b)
    select mean(b) from t
    
    //Output (2.00 releases):
    /*
    avg_b
    14
    */
    
    //Output (current release):
    /*
    mean_b
    14
    */

System Impacts Caused by Bug Fixes

  • Modified the behavior of asof join with multiple join columns when the left or right table is a partitioned table and the first (n-1) join columns do not include all partitioning columns:

    • In previous releases, the join was allowed.

    • Since this release, an error is thrown.

    // Create partitioned tables pt1 and pt2
    if(existsDatabase("dfs://aj_test")) dropDatabase("dfs://aj_test")
    db=database("dfs://aj_test", VALUE, 2023.01M..2024.01M, engine='TSDB')
    
    sym=take(`a`b, 20)
    date=2023.01.01 + (1..20) * 5
    value= rand(10.0, 20)
    t1 = table(sym, date, value)
    
    sym=take(`a`b, 20)
    date=2023.01.02 + (1..20) * 5
    price= rand(10.0, 20)
    t2 = table(sym, date, price)
    
    pt1 = db.createPartitionedTable(t1, "pt1", `date, sortColumns=`sym`date).append!(t1)
    pt2 = db.createPartitionedTable(t2, "pt2", `date, sortColumns=`sym`date).append!(t2)
    
    // asof join on pt1 and pt2
    select * from aj(pt1, pt2, `sym`date)
    
    // Previous releases returned the join result
    // Current release throws an error: In asof join (aj), if the left or right table is a partitioned table, the matching columns except the last one must include all partitioning columns.
  • In previous releases, passing a 0-column matrix to an aggregate or vector function was allowed. Since this release, an error is thrown.

    input=matrix(INT,1,0)
    valueChanged(input)
    
    // Previous releases returned the result
    // Current release throws an error: The column number of matrix must be greater than 0.
  • Changed the behavior of function at(X, [index]) when X is a function and index is a tuple:

    • In previous releases, the tuple was passed as an argument to X.

    • Since this release, each element of the tuple is passed as an argument to X.

    def myFunc(x, y=10){
        return x+y
    }
    at(myFunc, (1,2))
    
    // Output (previous releases): (11,12)
    // Output (current release): 3
    To replicate the previous behavior, wrap the tuple as the sole element of another tuple before passing it. For example:
    def myFunc(x, y=10){
        return x+y
    }
    at(myFunc, ().append!((1,2)))
    // Output: (11,12)
  • In previous releases, the operand of the SQL predicate IN could reference columns of DFS tables. Since this release, such columns can only be referenced through a subquery.

    // create DFS tables
    if(existsDatabase("dfs://in_test")){
    	dropDatabase("dfs://in_test")
    }
    db=database("dfs://in_test", VALUE, 1..10,,'TSDB')
    t1=table([1 ,1, 2 ,2] as id, "A"+string(1..4) as sym, rand(10.0,4) as val)
    t2=table([1 ,1, 2 ,3] as uid, "A"+string(1..4) as sym, rand(10.0,4) as val)
    pt1=db.createPartitionedTable(t1, `pt1,`id,,`sym).append!(t1)
    pt2=db.createPartitionedTable(t2, `pt2,`uid,,`sym).append!(t2)
    
    // Query
    select * from pt1 left join pt2 on pt1.id ==pt2.uid where pt1.id in (case when pt1.sym="A1" then 1 end)
    
    // Previous releases returned query results
    // Current release reports an error: The '[not] in' predicate cannot be followed by columns from the partitioned table. Please use a subquery instead.
    // Use a subquery
    select * from pt1 left join pt2 on pt1.id ==pt2.uid where pt1.id in (select case when sym="A1" then 1 end from pt1)
  • Modified the behavior of functions mimax, mimin, mfirstNot, mlastNot, mifirstNot, milastNot, mimaxLast, and miminLast when X is an indexed series or indexed matrix.

    • When the index contains NULL values, previous releases allowed the operation; since this release, an error is reported.

      data = indexedSeries(NULL 2014.01.12 2014.01.13 2014.01.14 2014.01.15 2014.01.16, 1..6)
      milastNot(data, 4)
      
      // Output (previous releases):
      /*
      label        col1
                   
      2014.01.12   
      2014.01.13   
      2014.01.14   3
      2014.01.15   3
      2014.01.16   3
      */
      
      // Output (current release): The row index of the matrix can't contain null if window is a time offset.
    • In previous releases, these functions slid the window by row, which was inconsistent with the time-based windowing logic for indexed data forms. This issue is fixed in the current release.

      data = indexedSeries(2014.01.11 2014.01.12 2014.01.13 2014.01.14 2014.01.15 2014.01.16, 1 NULL 3 NULL 4 5)
      milastNot(data, 4)
      
      // Output (previous releases):
      /*
      label        col1
      2014.01.11   
      2014.01.12   
      2014.01.13   
      2014.01.14   2
      2014.01.15   3
      2014.01.16   3
      */
      
      // Output (current release):
      /*
      label        col1
      2014.01.11   0
      2014.01.12   0
      2014.01.13   2
      2014.01.14   2
      2014.01.15   3
      2014.01.16   3
      */