FUNCTION BitAnd (Num AS Integer, Num2 PARAMS Integer[]) AS Integer
FUNCTION BitAnd (BinString AS Binary, BinString2 PARAMS Binary[]) AS Binary
FUNCTION BitOr (Num AS Integer, Num2 PARAMS Integer[]) AS Integer
FUNCTION BitOr (BinString AS Binary, BinString2 PARAMS Binary[]) AS Binary
FUNCTION BitXOr (Num AS Integer, Num2 PARAMS Integer[]) AS Integer
FUNCTION BitXOr (BinString AS Binary, BinString2 PARAMS Binary[]) AS Binary
FUNCTION BitNot (Num AS Integer) AS Integer
FUNCTION BitNot (BinString AS Binary) AS Binary
FUNCTION BitNot (BinString AS Binary, StartBit AS Integer, BitCount := 1 AS Integer) AS Binary
FUNCTION BitSet (Num AS Integer, Bit AS Integer) AS Integer
FUNCTION BitSet (BinString AS Binary, StartBit AS Integer, BitCount := 1 AS Integer) AS Binary
FUNCTION BitClear (Num AS Integer, Bit AS Integer) AS Integer
FUNCTION BitClear (BinString AS Binary, StartBit AS Integer, BitCount := 1 AS Integer) AS Binary
FUNCTION BitTest (Num AS Integer, Bit AS Integer) AS Logical
FUNCTION BitTest (BinString AS Binary, Bit AS Integer) AS Logical
FUNCTION BitLShift (Num AS Integer, BitShift AS Integer) AS Integer
FUNCTION BitRShift (Num AS Integer, BitShift AS Integer) AS Integer
I believe this covers all the expected variations of the functions. As you can see, some of your examples won't match against any of these declarations.
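For illustration, the PARAMS overloads would accept a variable number of operands, so calls like the following (hypothetical values, assuming the declarations above) should bind without ambiguity when the argument types are known:

? BitAnd(0xFF, 0x0F, 0x03)   // all operands AND-ed together -> 3
? BitOr(0h01, 0h02, 0h04)    // Binary operands of equal length -> 0h07
? BitXOr(0x0F, 0x05)         // -> 10 (0x0A)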
The problem with this approach is that in VFP almost everything is untyped, so the parameters will often be of type USUAL.
The compiler will have no idea which of these functions to call.
I think the best solution is to create "top level" functions with USUAL parameter types. In these functions we can inspect the types of the arguments, delegate the work to the right "worker function", and throw an error if an unexpected parameter type is passed.
FUNCTION BitNot (Num AS USUAL) AS USUAL
    SWITCH UsualType(Num)
    CASE __UsualType.Long
        RETURN _BitNot( (LONG) Num )
    CASE __UsualType.Int64   // should we allow this too ?
        RETURN _BitNot( (INT64) Num )
    CASE __UsualType.Binary
        RETURN _BitNot( (BINARY) Num )
    OTHERWISE
        THROW Exception{"Unexpected type"}
    END SWITCH

FUNCTION BitNot (BinString AS USUAL, StartBit AS Integer, BitCount := 1 AS Integer) AS USUAL
    // more work to do
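As a hedged sketch, that second overload might be completed along the same lines, assuming a Binary-only worker named _BitNot that takes the same StartBit/BitCount parameters (the worker is an assumption, not part of the code above):

FUNCTION BitNot (BinString AS USUAL, StartBit AS Integer, BitCount := 1 AS Integer) AS USUAL
    // only a Binary value makes sense for the StartBit/BitCount form
    SWITCH UsualType(BinString)
    CASE __UsualType.Binary
        RETURN _BitNot( (BINARY) BinString, StartBit, BitCount )   // assumed worker function
    OTHERWISE
        THROW Exception{"Unexpected type"}
    END SWITCH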
And since the version with an Integer should return an Integer and the version with a Binary should return a Binary, the only solution is to set the return type to USUAL.
Robert
XSharp Development Team
The Netherlands
robert@xsharp.eu
It's a decision you'll have to make. This set of functions either works with numeric parameters to return numeric results, or works with binary parameters to return binary results. If you want to accept USUAL types, then it will be USUAL on both sides, parameters and results. This is what was done with STRCONV(), which returns a USUAL result (String or Binary).
The BITNOT implementation (USUAL not taken into consideration):
FUNCTION BITNOT (Num AS Int) AS Int
    RETURN ~Num   // bitwise complement
ENDFUNC

FUNCTION BITNOT (BinString AS Binary) AS Binary
    // negate every bit of the binary string
    RETURN BITNOT(BinString, 0, BinString.Length * 8)
ENDFUNC

FUNCTION BITNOT (BinString AS Binary, StartBit AS Int, BitCount := 1 AS Int) AS Binary
    LOCAL Result := BinString AS Byte[]
    LOCAL ByteIndex AS Int
    LOCAL BitIndex := StartBit AS Int
    LOCAL BitCounter AS Int
    FOR BitCounter := 1 TO BitCount
        ByteIndex := BitIndex / 8 + 1   // 1-based index of the byte holding the current bit
        Result[ByteIndex] := _Xor(Result[ByteIndex], 1 << BitIndex % 8)   // flip the bit via XOR
        BitIndex++
    NEXT
    RETURN (Binary)Result
ENDFUNC
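A quick sanity check of these overloads could look like this (a hypothetical sketch; the expected values follow the bit ordering of the loop above, with bit 0 as the lowest bit of the first byte):

LOCAL b := 0hF0 AS Binary
? BITNOT(0x0000000F)   // ~15 -> -16 (0xFFFFFFF0)
? BITNOT(b)            // flips all 8 bits -> 0h0F
? BITNOT(b, 0, 4)      // flips only bits 0..3 -> 0hFF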
? 0h170101001000 // output is a string and the value is "0h170101001000" ?
? 0h170101001000 + "addSomeText" // -> output is a string and the value is "0h170101001000616464536F6D6554657874" ?
? 0h110101001000 + "addSomeText" // -> output is a string and the value is "0h110101001000616464536F6D6554657874" ?
LOCAL N
LOCAL B
N := 10
B := 0hFFB0
? BITNOT(N)
? BITNOT(B)
The first display statement executes properly and calls the BITNOT(Int) AS Int overload; the second raises an error: XSharp.Error: 'Conversion Error from USUAL (UNKNOWN) to LONGINT'.
My question is: why is X# trying to convert a USUAL to LONGINT in the first place? If X# knows the type of the underlying value in the second statement ("B"), and if it's going to convert before calling the function, why not convert to Binary instead?
The literals prefixed with 0h are of type Binary, which is basically a binary string (a Byte[]). It's a VFP data type that X# has implemented since version 2.7.
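To make that relationship concrete, here is a small hedged illustration (it uses the same Binary-to-Byte[] conversion as the BITNOT implementation above, and assumes 1-based array indexing):

LOCAL b := 0hFFB0 AS Binary
LOCAL bytes := b AS Byte[]   // same conversion used in the BITNOT implementation
? bytes.Length               // -> 2
? bytes[1]                   // -> 255 (0xFF)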
The Bit* functions work on this data type differently from how they work with numeric values. For instance, a BitOr operation on a Binary string may extend the length of the string. In some cases, the meaning of the parameters is not even the same in the two implementations.
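For example, under the assumption that the shorter operand is left-aligned and zero-padded to the length of the longer one (a hedged reading of the VFP behavior, not confirmed here), a Binary BitOr could grow the result:

LOCAL shortBin := 0hF0 AS Binary
LOCAL longBin  := 0h0F0F0F AS Binary
? BITOR(shortBin, longBin)   // would yield the 3-byte value 0hFF0F0F under that assumption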
Hex literals are represented in VFP in the same way as in X#, except for the underscore separator, and VFP has no provision for base-2 (binary) literals.
The ? operator is not the best way to look at a value like a Binary value. It calls the AsString() function on its parameters, and there may be additional logic in AsString() that changes what you see. For example, when displaying a floating-point number it will use either the internal precision and decimals (for floats) or the SetDecimals() and SetFixed() settings to format the number. That may produce a value like "******" if the number does not fit in the preset width, or it may hide decimals that are stored in the number: you may see 1.234 for a number whose actual value is 1.2336.
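A small hedged sketch of that formatting effect, assuming the VO-compatible SetDecimals()/SetFixed()/Str() runtime functions:

LOCAL f := 1.2336 AS Float
SetDecimals(3)
SetFixed(TRUE)    // force the display to honor the SetDecimals() setting
? f               // may display as 1.234 even though the stored value is 1.2336
? Str(f, 10, 4)   // asking Str() for 4 decimals reveals 1.2336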
I hope that Robert and the X# team will clarify the difference in behavior between the two data types (numeric and binary string). It will help when composing the implementations of other VFP functions.